

I tried out Granite 3.0

Oct 28, 2024 am 04:23 AM


Granite 3.0

Granite 3.0 is an open-source, lightweight family of generative language models designed for enterprise-level tasks. It natively supports multiple languages, coding, reasoning, and tool use.

I ran these models to see what tasks they can handle.

Environment Setup

I set up the Granite 3.0 environment in Google Colab and installed the necessary libraries using the following commands:

!pip install torch torchvision torchaudio
!pip install accelerate
!pip install -U transformers
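The snippets below move inputs to "cuda", so before loading a model it's worth confirming that a GPU runtime is active. A one-line check (my own addition, not part of the original setup):

import torch

# The code below calls .to("cuda"), so a GPU runtime must be selected
# in Colab (Runtime > Change runtime type > GPU)
print(torch.cuda.is_available())  # should print True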

Execution

I tested the performance of both the 2B and 8B models of Granite 3.0.

2B Model

First, I ran the 2B model. Here’s the code sample:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "auto" lets accelerate place the model weights on the available device(s)
device = "auto"
model_path = "ibm-granite/granite-3.0-2b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
# Render the chat into the model's prompt format and append the assistant turn marker
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
output = tokenizer.batch_decode(output)
print(output[0])

Output

<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>1. IBM Research - Austin, Texas<|end_of_text|>

8B Model

The 8B model can be used by replacing 2b with 8b in the model path. The following sample also omits the role field from the chat message:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "auto"
model_path = "ibm-granite/granite-3.0-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

chat = [
    # The "role" field is deliberately omitted from this message
    { "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

input_tokens = tokenizer(chat, add_special_tokens=False, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt and special tokens
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)

Output

1. IBM Almaden Research Center - San Jose, California

Function Calling

I explored the Function Calling feature, testing it with a dummy function. Here, get_current_weather is defined to return mock weather data.

Dummy Function

import json

def get_current_weather(location: str) -> dict:
    """
    Returns mock weather information for the specified location.
    Args:
        location (str): Name of the city to retrieve weather data for.
    Returns:
        dict: Dictionary containing weather information (description, temperature, humidity).
    """
    print(f"Getting current weather for {location}")

    try:
        weather_description = "sample"
        temperature = "20.0"
        humidity = "80.0"

        return {
            "description": weather_description,
            "temperature": temperature,
            "humidity": humidity
        }
    except Exception as e:
        print(f"Error fetching weather data: {e}")
        return {"weather": "NA"}

Prompt Creation

I created a prompt to call the function:

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and country code, e.g. San Francisco, US",
                }
            },
            "required": ["location"],
        },
    },
]
query = "What's the weather like in Boston?"
payload = {
    "functions_str": [json.dumps(x) for x in functions]
}
chat = [
    {"role":"system","content": f"You are a helpful assistant with access to the following function calls. Your task is to produce a sequence of function calls necessary to generate response to the user utterance. Use the following function calls as required.{payload}"},
    {"role": "user", "content": query }
]

Response Generation

Using the following code, I generated a response:

instruction_1 = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(instruction_1, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=1024)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)

Output

{'name': 'get_current_weather', 'arguments': {'location': 'Boston'}}

This confirmed the model’s ability to generate the correct function call based on the specified city.
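To act on this output, one option is to parse the generated text and dispatch to the local function. Here’s a minimal sketch of that glue code (my own addition, not from the article; note the model emitted Python dict-literal syntax with single quotes, so ast.literal_eval is a better fit than json.loads):

import ast

# Parse the model's plain-text function call (Python dict-literal syntax)
call = ast.literal_eval(generated_text.strip())

# Dispatch to the matching local implementation
available_functions = {"get_current_weather": get_current_weather}
result = available_functions[call["name"]](**call["arguments"])
print(result)  # {'description': 'sample', 'temperature': '20.0', 'humidity': '80.0'}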

Format Specification for Enhanced Interaction Flow

Granite 3.0 supports format specification, which makes it possible to elicit responses in structured formats. This section shows how to use an [UTTERANCE] tag for spoken responses and a [THINK] tag for inner thoughts.

Note that because function calls are emitted as plain text, a separate mechanism may be needed to distinguish them from regular text responses.
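One such mechanism could be a simple heuristic like the following sketch (my own, assuming function calls always arrive as a single dict literal):

import ast

def looks_like_function_call(text: str) -> bool:
    # Treat the output as a function call if it parses as a dict
    # literal carrying "name" and "arguments" keys
    try:
        parsed = ast.literal_eval(text.strip())
    except (ValueError, SyntaxError):
        return False
    return isinstance(parsed, dict) and {"name", "arguments"} <= parsed.keys()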

Specifying Output Format

Here’s a sample prompt for guiding the AI’s output:

prompt = """You are a conversational AI assistant that deepens interactions by alternating between responses and inner thoughts.
<Constraints>
* Record spoken responses after the [UTTERANCE] tag and inner thoughts after the [THINK] tag.
* Use [UTTERANCE] as a start marker to begin outputting an utterance.
* After [THINK], describe your internal reasoning or strategy for the next response. This may include insights on the user's reaction, adjustments to improve interaction, or further goals to deepen the conversation.
* Important: **Use [UTTERANCE] and [THINK] as a start signal without needing a closing tag.**
</Constraints>

Follow these instructions, alternating between [UTTERANCE] and [THINK] formats for responses.
<output example>
example1:
  [UTTERANCE]Hello! How can I assist you today?[THINK]I’ll start with a neutral tone to understand their needs. Preparing to offer specific suggestions based on their response.[UTTERANCE]Thank you! In that case, I have a few methods I can suggest![THINK]Since I now know what they’re looking for, I'll move on to specific suggestions, maintaining a friendly and approachable tone.
...
</output example>

Please respond to the following user_input.
<user_input>
Hello! What can you do?
</user_input>
"""

Execution Code Example

Here’s the code to generate a response:

chat = [
    { "role": "user", "content": prompt },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

input_tokens = tokenizer(chat, return_tensors="pt").to("cuda")
output = model.generate(**input_tokens, max_new_tokens=1024)
generated_text = tokenizer.decode(output[0][input_tokens["input_ids"].shape[1]:], skip_special_tokens=True)
print(generated_text)

Example Output

The output is as follows:

[UTTERANCE]Hello! I'm here to provide information, answer questions, and assist with various tasks. I can help with a wide range of topics, from general knowledge to specific queries. How can I assist you today?
[THINK]I've introduced my capabilities and offered assistance, setting the stage for the user to share their needs or ask questions.

The [UTTERANCE] and [THINK] tags were successfully used, allowing effective response formatting.

Depending on the prompt, closing tags (such as [/UTTERANCE] or [/THINK]) may occasionally appear in the output, but overall the output format can be specified reliably.
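To consume this format programmatically, a small parser can split the text on the tags. A sketch (my own helper, which also tolerates the occasional closing tag):

import re

# Split the tagged output into (tag, text) pairs; the lookahead ends each
# segment at the next opening or closing tag, or at the end of the text
segments = re.findall(
    r"\[(UTTERANCE|THINK)\](.*?)(?=\[/?(?:UTTERANCE|THINK)\]|$)",
    generated_text,
    re.DOTALL,
)
for tag, text in segments:
    print(f"{tag}: {text.strip()}")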

Streaming Code Example

Let’s also look at how to output streaming responses.

The following code uses the asyncio and threading libraries to asynchronously stream responses from Granite 3.0.

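Here’s a minimal sketch of that setup, built around transformers’ TextIteratorStreamer and reusing the model and tokenizer loaded earlier: generate() runs in a background thread while a coroutine prints each piece of text as it arrives.

import asyncio
from threading import Thread
from transformers import TextIteratorStreamer

async def stream_response(prompt: str) -> None:
    chat = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    input_tokens = tokenizer(text, return_tensors="pt").to("cuda")

    # skip_prompt=True emits only the newly generated text
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

    # generate() blocks, so run it in a worker thread and drain the streamer here
    thread = Thread(target=model.generate, kwargs=dict(**input_tokens, streamer=streamer, max_new_tokens=100))
    thread.start()

    for token_text in streamer:
        print(token_text, end="", flush=True)
        await asyncio.sleep(0)  # hand control back to the event loop between tokens

    thread.join()

# Top-level await works in a Colab / IPython cell
await stream_response("Please list one IBM Research laboratory located in the United States.")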

Example Output

Running the above code streams the response token by token instead of printing it all at once.


This example demonstrates successful streaming. Each token is generated asynchronously and displayed sequentially, allowing users to view the generation process in real time.

Summary

Granite 3.0 provides reasonably strong responses even with the 8B model. The Function Calling and Format Specification features also work quite well, indicating its potential for a wide range of applications.
