

I tried out Granite 3.0

Oct 28, 2024, 04:23 AM


Granite 3.0

Granite 3.0 is a family of open-source, lightweight generative language models designed for a range of enterprise tasks. It natively supports multilingual capabilities, coding, reasoning, and tool use, making it a good fit for enterprise environments.

我測(cè)試了運(yùn)行這個(gè)模型,看看它可以處理哪些任務(wù)。

環(huán)境設(shè)定

I set up the Granite 3.0 environment in Google Colab and installed the required libraries with the following commands:

!pip install torch torchvision torchaudio
!pip install accelerate
!pip install -U transformers
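
Since the examples below move tensors to "cuda", a quick sanity check can confirm that the Colab runtime actually has a GPU attached:

import torch

# Optional check: the code samples in this article assume a GPU runtime.
print(torch.cuda.is_available())      # should print True on a GPU runtime
print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4" on the free tier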

執(zhí)行

我測(cè)試了Granite 3.0的2B和8B型號(hào)的性能。

2B Model

我運(yùn)行了 2B 模型。這是 2B 模型的程式碼範(fàn)例:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
# Render the chat into Granite's prompt format, then tokenize and move to the GPU.
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
output = tokenizer.batch_decode(output)
print(output[0])

Output

<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>1. IBM Research - Austin, Texas<|end_of_text|>

8B Model

To use the 8B model, simply replace 2b with 8b in the model path. Here is a code sample for the 8B model, this time omitting the role field from the chat entry:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    { "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

input_tokens = tokenizer(chat, add_special_tokens=False, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt portion.
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)

Output

1. IBM Almaden Research Center - San Jose, California

函數(shù)呼叫

I explored the function calling capability and tested it with a dummy function. Here, get_current_weather is defined to return mock weather data.

Dummy Function

import json

def get_current_weather(location: str = "San Francisco") -> dict:
    """
    Retrieves current weather information for the specified location (default: San Francisco).
    Args:
        location (str): Name of the city to retrieve weather data for.
    Returns:
        dict: Dictionary containing weather information (temperature, description, humidity).
    """
    print(f"Getting current weather for {location}")

    try:
        weather_description = "sample"
        temperature = "20.0"
        humidity = "80.0"

        return {
            "description": weather_description,
            "temperature": temperature,
            "humidity": humidity
        }
    except Exception as e:
        print(f"Error fetching weather data: {e}")
        return {"weather": "NA"}

即時(shí)創(chuàng)作

I created a prompt that invokes the function:

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and country code, e.g. San Francisco, US",
                }
            },
            "required": ["location"],
        },
    },
]
query = "What's the weather like in Boston?"
payload = {
    "functions_str": [json.dumps(x) for x in functions]
}
chat = [
    {"role":"system","content": f"You are a helpful assistant with access to the following function calls. Your task is to produce a sequence of function calls necessary to generate response to the user utterance. Use the following function calls as required.{payload}"},
    {"role": "user", "content": query }
]

響應(yīng)生成

Using the following code, I generated a response:

instruction_1 = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(instruction_1, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=1024)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)

Output

{'name': 'get_current_weather', 'arguments': {'location': 'Boston'}}

這證實(shí)了模型能夠根據(jù)指定城市產(chǎn)生正確的函數(shù)呼叫。

增強(qiáng)互動(dòng)流程的格式規(guī)範(fàn)

Granite 3.0 accepts format specifications that shape responses into a structured format. This section explains how to use [UTTERANCE] for spoken responses and [THINK] for inner thoughts.

On the other hand, since function calls are output as plain text, a separate mechanism may be needed to distinguish function calls from regular text responses.

Specifying the Output Format

Here is a sample prompt that guides the AI's output:

prompt = """You are a conversational AI assistant that deepens interactions by alternating between responses and inner thoughts.
<Constraints>
* Record spoken responses after the [UTTERANCE] tag and inner thoughts after the [THINK] tag.
* Use [UTTERANCE] as a start marker to begin outputting an utterance.
* After [THINK], describe your internal reasoning or strategy for the next response. This may include insights on the user's reaction, adjustments to improve interaction, or further goals to deepen the conversation.
* Important: **Use [UTTERANCE] and [THINK] as a start signal without needing a closing tag.**
</Constraints>

Follow these instructions, alternating between [UTTERANCE] and [THINK] formats for responses.
<output example>
example1:
  [UTTERANCE]Hello! How can I assist you today?[THINK]I’ll start with a neutral tone to understand their needs. Preparing to offer specific suggestions based on their response.[UTTERANCE]Thank you! In that case, I have a few methods I can suggest![THINK]Since I now know what they’re looking for, I'll move on to specific suggestions, maintaining a friendly and approachable tone.
...
</output example>

Please respond to the following user_input.
<user_input>
Hello! What can you do?
</user_input>
"""

執(zhí)行程式碼範(fàn)例

產(chǎn)生回應(yīng)的程式碼:

chat = [
    { "role": "user", "content": prompt },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=1024)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)

範(fàn)例輸出

The output was as follows:

[UTTERANCE]Hello! I'm here to provide information, answer questions, and assist with various tasks. I can help with a wide range of topics, from general knowledge to specific queries. How can I assist you today?
[THINK]I've introduced my capabilities and offered assistance, setting the stage for the user to share their needs or ask questions.

The [UTTERANCE] and [THINK] tags were applied successfully, enabling effective response formatting.

根據(jù)提示的不同,輸出中有時(shí)可能會(huì)出現(xiàn)結(jié)束標(biāo)籤(例如[/UTTERANCE]或[/THINK]),但總的來說,一般都可以成功指定輸出格式。

串流程式碼範(fàn)例

Let's look at how to stream a response.

The following code uses the asyncio and threading libraries to stream responses from Granite 3.0 asynchronously.

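A minimal sketch of this approach, assuming the model and tokenizer from the 2B example are already loaded: generate() runs on a background thread and feeds transformers' TextIteratorStreamer, while a coroutine prints each decoded chunk as it arrives.

import asyncio
from threading import Thread
from transformers import TextIteratorStreamer

async def stream_response(prompt: str):
    chat = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    input_tokens = tokenizer(text, return_tensors="pt").to("cuda")

    # TextIteratorStreamer yields decoded text chunks as generate() produces them.
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

    # Run generation on a background thread so this coroutine can consume tokens.
    thread = Thread(target=model.generate,
                    kwargs=dict(**input_tokens, streamer=streamer, max_new_tokens=100))
    thread.start()

    for chunk in streamer:
        print(chunk, end="", flush=True)
        await asyncio.sleep(0)  # hand control back to the event loop between chunks
    thread.join()

# Colab/Jupyter already runs an event loop, so await works at the top level;
# in a plain script, use asyncio.run(stream_response(...)) instead.
await stream_response("Please list one IBM Research laboratory located in the United States.")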

範(fàn)例輸出

執(zhí)行上述程式碼將產(chǎn)生以下格式的非同步回應(yīng):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
output = tokenizer.batch_decode(output)
print(output[0])

此範(fàn)例示範(fàn)了成功的串流。每個(gè)token都是非同步生成並順序顯示,讓使用者可以即時(shí)查看生成過程。

Summary

Granite 3.0 delivers fairly strong responses even at the 8B size. The function calling and format specification features also work well, suggesting broad application potential.
