

Integrating Large Language Models in Production Applications

Jan 07, 2025 06:24 AM

In this practical guide, I'll walk you through building a production-ready LLM integration with proper error handling, caching, rate limiting, and monitoring.
We'll start with Hugging Face's GPT-2 model because it's free to run locally, but every pattern here carries over to larger models such as GPT-4 or Claude.
Whether you're adding AI features to an existing application or building a new AI-powered product, these techniques will help you ship a reliable LLM integration.

Understanding the Challenges of LLM Integration

If you've ever tried to take an LLM prototype to production, you know the hard part isn't getting a first response out of the model. A production-grade integration needs far more than a working API call: failures, timeouts, and transient errors have to be handled gracefully, and latency and cost kept under control. On top of that, a real application needs prompt management, rate limiting, and usage monitoring.
Over the course of this guide, we'll build:

  • A robust LLM client with proper error handling and automatic retries
  • A caching system that cuts costs and latency
  • A prompt template management system
  • Rate limiting and usage monitoring
  • A complete content moderation API as the working example

Prerequisites

Before we start, make sure you have the following in place:

  • Python 3.8 or newer installed on your machine
  • A Redis server (local or remote) you can connect to
  • Basic Python programming knowledge
  • Familiarity with REST APIs
  • A Hugging Face API key (or credentials for another LLM provider)

You can find the complete code for this tutorial in the accompanying GitHub repository.

Setting Up the Project Environment

Let's begin by setting up our project environment. A clean, modular structure will keep the code easy to maintain and extend.

First, create the project directory, then create and activate a Python virtual environment:

mkdir llm_integration && cd llm_integration
python3 -m venv env
source env/bin/activate

Next, create a requirements.txt file in the project root and pin our dependencies:

transformers==4.36.0
huggingface-hub==0.19.4
redis==4.6.0
pydantic==2.5.0
pydantic-settings==2.1.0
tenacity==8.2.3
python-dotenv==1.0.0
fastapi==0.104.1
uvicorn==0.24.0
torch==2.1.0
numpy==1.24.3

Here's what each package is for:

  • transformers: Hugging Face's library for loading and running transformer models; we use it for GPT-2 here, and the same code works with larger models such as Qwen2.5-Coder
  • huggingface-hub: handles model downloads and version management
  • redis: the Python client for our caching layer
  • pydantic and pydantic-settings: data validation and typed configuration management
  • tenacity: retry logic that keeps transient failures from breaking requests
  • python-dotenv: loads environment variables from .env files
  • fastapi: a modern, fast web framework for our API endpoints
  • uvicorn: the ASGI server that runs the FastAPI application
  • torch: the deep learning framework that executes the model's tensor computations
  • numpy: efficient numerical operations

Now install all of the dependencies:

pip install -r requirements.txt

Finally, lay out the project structure. Create these directories and files:

llm_integration/
├── core/
│   ├── llm_client.py       # Main LLM interaction code
│   ├── prompt_manager.py   # Handles prompt templates
│   └── response_handler.py # Processes LLM responses
├── cache/
│   └── redis_manager.py    # Manages the caching system
├── config/
│   └── settings.py         # Configuration management
├── api/
│   └── routes.py           # API endpoints
├── utils/
│   ├── monitoring.py       # Usage tracking
│   └── rate_limiter.py     # Rate limiting logic
├── requirements.txt
├── main.py
└── usage_logs.json

Implementing the LLM Client

Let's build the heart of the system first: the LLM client that talks to the model. It wraps GPT-2 here, but the same interface works with any other Hugging Face causal language model. Add the following code to core/llm_client.py:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from tenacity import retry, stop_after_attempt, wait_exponential
from typing import Dict, Optional
import logging

class LLMClient:
    def __init__(self, model_name: str = "gpt2", timeout: int = 30):
        try:
            self.tokenizer = AutoTokenizer.from_pretrained(model_name)
            self.model = AutoModelForCausalLM.from_pretrained(
                model_name,
                device_map="auto",
                torch_dtype=torch.float16
            )
        except Exception as e:
            logging.error(f"Error loading model: {str(e)}")
            # Fallback to a simpler model if the specified one fails
            self.tokenizer = AutoTokenizer.from_pretrained("gpt2")
            self.model = AutoModelForCausalLM.from_pretrained("gpt2")

        self.timeout = timeout
        self.logger = logging.getLogger(__name__)

In its initializer, the LLMClient class takes care of several production concerns:

  • It loads the model and tokenizer through Hugging Face's AutoModelForCausalLM and AutoTokenizer abstractions
  • device_map="auto" places the model on a GPU automatically when one is available, falling back to CPU otherwise
  • It loads the weights as torch.float16, roughly halving memory usage on supported hardware

With the model loaded, we add the method our application will call to generate completions. Add it to the same class:

    @retry(
        stop=stop_after_attempt(3),
        wait=wait_exponential(multiplier=1, min=4, max=10),
        reraise=True
    )
    async def complete(self,
                       prompt: str,
                       temperature: float = 0.7,
                       max_tokens: Optional[int] = None) -> Dict:
        """Get completion from the model with automatic retries"""
        try:
            inputs = self.tokenizer(prompt, return_tensors="pt").to(
                self.model.device
            )

            with torch.no_grad():
                outputs = self.model.generate(
                    **inputs,
                    max_new_tokens=max_tokens or 100,
                    temperature=temperature,
                    do_sample=True
                )

            response_text = self.tokenizer.decode(
                outputs[0],
                skip_special_tokens=True
            )

            # Calculate token usage for monitoring
            input_tokens = len(inputs.input_ids[0])
            output_tokens = len(outputs[0]) - input_tokens

            return {
                'content': response_text,
                'usage': {
                    'prompt_tokens': input_tokens,
                    'completion_tokens': output_tokens,
                    'total_tokens': input_tokens + output_tokens
                },
                'model': "gpt2"
            }

        except Exception as e:
            self.logger.error(f"Error in LLM completion: {str(e)}")
            raise

This completion method brings several production-ready behaviors:

  • The @retry decorator retries transient failures automatically with exponential backoff
  • Generation runs inside torch.no_grad(), which skips gradient tracking and reduces memory use during inference
  • Token usage is computed for both the prompt and the completion, which our monitoring will rely on later
  • Errors are logged with context before being re-raised, so callers can handle them while we keep a record

Adding a Response Handler

Next, the raw text the LLM returns has to be parsed into structured data the rest of the application can act on. Add the following to core/response_handler.py:

from typing import Dict
import logging

class ResponseHandler:
    def __init__(self):
        self.logger = logging.getLogger(__name__)

    def parse_moderation_response(self, raw_response: str) -> Dict:
        """Parse and structure the raw LLM response for moderation"""
        try:
            # Default response structure
            structured_response = {
                "is_appropriate": True,
                "confidence_score": 0.0,
                "reason": None
            }

            # Simple keyword-based analysis
            lower_response = raw_response.lower()

            # Check for inappropriate content signals
            if any(word in lower_response for word in ['inappropriate', 'unsafe', 'offensive', 'harmful']):
                structured_response["is_appropriate"] = False
                structured_response["confidence_score"] = 0.9
                # Extract reason if present
                if "because" in lower_response:
                    reason_start = lower_response.find("because")
                    structured_response["reason"] = raw_response[reason_start:].split('.')[0].strip()
            else:
                structured_response["confidence_score"] = 0.95

            return structured_response

        except Exception as e:
            self.logger.error(f"Error parsing response: {str(e)}")
            return {
                "is_appropriate": True,
                "confidence_score": 0.5,
                "reason": "Failed to parse response"
            }

    def format_response(self, raw_response: Dict) -> Dict:
        """Format the final response with parsed content and usage stats"""
        try:
            return {
                "content": self.parse_moderation_response(raw_response["content"]),
                "usage": raw_response["usage"],
                "model": raw_response["model"]
            }
        except Exception as e:
            self.logger.error(f"Error formatting response: {str(e)}")
            raise

Building a Robust Caching System

To keep latency and compute costs down, we cache model responses so identical requests are only processed once. Add the following to cache/redis_manager.py:

import redis
from typing import Optional, Any
import json
import hashlib

class CacheManager:
    def __init__(self, redis_url: str, ttl: int = 3600):
        self.redis = redis.from_url(redis_url)
        self.ttl = ttl

    def _generate_key(self, prompt: str, params: dict) -> str:
        """Generate a unique cache key"""
        cache_data = {
            'prompt': prompt,
            'params': params
        }
        serialized = json.dumps(cache_data, sort_keys=True)
        return hashlib.sha256(serialized.encode()).hexdigest()

    async def get_cached_response(self,
                                prompt: str,
                                params: dict) -> Optional[dict]:
        """Retrieve cached LLM response"""
        key = self._generate_key(prompt, params)
        cached = self.redis.get(key)
        return json.loads(cached) if cached else None

    async def cache_response(self,
                           prompt: str,
                           params: dict,
                           response: dict) -> None:
        """Cache LLM response"""
        key = self._generate_key(prompt, params)
        self.redis.setex(
            key,
            self.ttl,
            json.dumps(response)
        )

This caching layer is built around the CacheManager class, which implements:

  • _generate_key, which derives a deterministic SHA-256 cache key from the prompt and the generation parameters
  • get_cached_response, which returns the stored response for an identical prompt and parameter combination, if one exists
  • cache_response, which stores a response with a TTL so stale entries expire on their own

Creating a Smart Prompt Management System

Hard-coding prompts throughout the codebase quickly becomes unmanageable, so we centralize them in template files behind a small manager class in core/prompt_manager.py, sketched below.

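Here is a minimal sketch of what this file can look like. The PromptManager name matches how it's used in the routes later; the JSON template format and the get_prompt signature are assumptions:

# core/prompt_manager.py (a minimal sketch)
import json
from pathlib import Path

class PromptManager:
    def __init__(self, templates_dir: str = "prompts"):
        self.templates_dir = Path(templates_dir)
        self._templates: dict = {}

    def _load_template(self, name: str) -> dict:
        # Load a JSON template by name and memoize it for reuse
        if name not in self._templates:
            path = self.templates_dir / f"{name}.json"
            with open(path, "r") as f:
                self._templates[name] = json.load(f)
        return self._templates[name]

    def get_prompt(self, name: str, **variables) -> str:
        """Render a named template with the given variables."""
        template = self._load_template(name)
        return template["template"].format(**variables)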

Next, create the prompt template for content moderation in prompts/content_moderation.json:

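A template along these lines works with the sketch above; the wording of the moderation instructions is illustrative, not prescriptive:

{
    "name": "content_moderation",
    "template": "Analyze the following content and decide whether it is appropriate or inappropriate for a general audience. Explain your reasoning briefly.\n\nContent: {content}\n\nAnalysis:"
}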

Because the templates live in JSON files, we can version them, review changes, and update prompts without touching application code; supporting a new task type is just a new file.

Setting Up Configuration Management

Rather than scattering LLM settings and API keys across the codebase, we centralize configuration in one place. Add the following to config/settings.py:

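A sketch using pydantic-settings, which is already pinned in requirements.txt. The two environment variable names appear later in this guide; the remaining fields and their defaults are assumptions:

# config/settings.py (sketch; field defaults are assumptions)
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # protected_namespaces=() lets us use a field named model_name
    model_config = SettingsConfigDict(env_file=".env", protected_namespaces=())

    huggingface_api_key: str = ""              # read from HUGGINGFACE_API_KEY
    redis_url: str = "redis://localhost:6379"  # read from REDIS_URL
    model_name: str = "gpt2"
    cache_ttl: int = 3600           # seconds a cached response stays valid
    rate_limit_requests: int = 60   # allowed requests per window
    rate_limit_window: int = 60     # window length in seconds

settings = Settings()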

Implementing Rate Limiting

To protect the service from abuse and keep usage within budget, we limit how often each client can call the API. Add the following to utils/rate_limiter.py:

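A sketch of a fixed-window limiter backed by Redis. The check_rate_limit name comes from the description below; the key scheme and the 429 response are assumptions:

# utils/rate_limiter.py (sketch)
import redis
from fastapi import HTTPException

class RateLimiter:
    def __init__(self, redis_url: str, max_requests: int = 60, window_seconds: int = 60):
        self.redis = redis.from_url(redis_url)
        self.max_requests = max_requests
        self.window_seconds = window_seconds

    async def check_rate_limit(self, client_id: str) -> None:
        """Raise 429 if this client exceeded its request budget for the window."""
        key = f"rate_limit:{client_id}"
        current = self.redis.incr(key)
        if current == 1:
            # First request in this window: start the expiry clock
            self.redis.expire(key, self.window_seconds)
        if current > self.max_requests:
            raise HTTPException(status_code=429, detail="Rate limit exceeded")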

The RateLimiter's check_rate_limit method counts each client's requests within a time window and rejects further requests once the limit is exceeded (an HTTP 429 in the sketch above), before any expensive model work happens.

Creating the API Endpoints

Now let's create the API endpoints in api/routes.py that connect the LLM to our application:

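The dependency-getter names (get_llm_client, get_response_handler, get_cache_manager, get_prompt_manager) and the /moderate endpoint follow the description below; the request model, generation parameters, and the simplified client identifier are assumptions in this sketch:

# api/routes.py (sketch)
from functools import lru_cache
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel

from core.llm_client import LLMClient
from core.prompt_manager import PromptManager
from core.response_handler import ResponseHandler
from cache.redis_manager import CacheManager
from utils.rate_limiter import RateLimiter
from config.settings import settings

router = APIRouter()

class ModerationRequest(BaseModel):
    content: str

# lru_cache turns these into singletons, so the model, cache,
# and templates are created once and reused across requests
@lru_cache()
def get_llm_client() -> LLMClient:
    return LLMClient(model_name=settings.model_name)

@lru_cache()
def get_response_handler() -> ResponseHandler:
    return ResponseHandler()

@lru_cache()
def get_cache_manager() -> CacheManager:
    return CacheManager(redis_url=settings.redis_url, ttl=settings.cache_ttl)

@lru_cache()
def get_prompt_manager() -> PromptManager:
    return PromptManager()

@lru_cache()
def get_rate_limiter() -> RateLimiter:
    return RateLimiter(
        redis_url=settings.redis_url,
        max_requests=settings.rate_limit_requests,
        window_seconds=settings.rate_limit_window,
    )

@router.post("/moderate")
async def moderate_content(
    request: ModerationRequest,
    llm_client: LLMClient = Depends(get_llm_client),
    response_handler: ResponseHandler = Depends(get_response_handler),
    cache_manager: CacheManager = Depends(get_cache_manager),
    prompt_manager: PromptManager = Depends(get_prompt_manager),
    rate_limiter: RateLimiter = Depends(get_rate_limiter),
):
    # Reject over-quota clients before doing any expensive model work;
    # a real deployment would key on the caller's IP or API key
    await rate_limiter.check_rate_limit("default")

    prompt = prompt_manager.get_prompt("content_moderation", content=request.content)
    params = {"temperature": 0.3, "max_tokens": 100}

    # Serve identical content straight from Redis when possible
    cached = await cache_manager.get_cached_response(prompt, params)
    if cached:
        return cached

    try:
        raw = await llm_client.complete(prompt, **params)
        result = response_handler.format_response(raw)
        await cache_manager.cache_response(prompt, params, result)
        return result
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))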

This wires up the routing layer. The APIRouter exposes a /moderate endpoint, and the @lru_cache decorator on the dependency functions (get_llm_client, get_response_handler, get_cache_manager and get_prompt_manager) ensures the LLMClient, CacheManager and PromptManager instances are created once and reused, which matters because loading the model is expensive. The moderate_content function, registered with @router.post, handles incoming POST requests and receives its components through FastAPI's dependency injection. Before any model work happens it consults the RateLimiter, so clients that exceed their request budget are turned away cheaply.

Finally, update main.py to bring everything together:

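A sketch consistent with the description below (the application title is an assumption):

# main.py (sketch)
import uvicorn
from fastapi import FastAPI

from api.routes import router

app = FastAPI(title="LLM Integration API")

# Everything defined in api/routes.py is served under /api/v1
app.include_router(router, prefix="/api/v1")

if __name__ == "__main__":
    # Serves the app on localhost:8000
    uvicorn.run(app, host="127.0.0.1", port=8000)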

This file creates the FastAPI application instance and mounts the routes from api.routes under the /api/v1 prefix, keeping application setup separate from endpoint logic. When the file is executed directly, it starts Uvicorn and serves the app on localhost:8000.

Running the Application

We're ready to run the service. First, create a .env file in the project root with your HUGGINGFACE_API_KEY and REDIS_URL:

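With placeholder values (substitute your own key):

HUGGINGFACE_API_KEY=your_api_key_here
REDIS_URL=redis://localhost:6379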

Then make sure a Redis server is running locally. On most Unix-like systems you can install it with your package manager and start it directly:

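For example, on Debian or Ubuntu (on macOS, brew install redis works the same way):

sudo apt-get install redis-server
redis-server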

Now you can start the API server:

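Assuming the main.py sketched earlier, which calls uvicorn.run when executed directly:

python main.py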

The FastAPI server starts at http://localhost:8000, and the automatically generated API documentation is available at http://localhost:8000/docs. Our content moderation API is live!


Testing the Content Moderation API

Let's see the API in action with a real request. The example below shows the request and the kind of response to expect.

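For example, with curl (the sample text is arbitrary):

curl -X POST http://localhost:8000/api/v1/moderate \
  -H "Content-Type: application/json" \
  -d '{"content": "This is a perfectly friendly message."}'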

The response contains the structured moderation verdict together with the token usage for the request.

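Given the ResponseHandler above, the body has this shape (exact numbers will vary):

{
    "content": {
        "is_appropriate": true,
        "confidence_score": 0.95,
        "reason": null
    },
    "usage": {
        "prompt_tokens": 42,
        "completion_tokens": 35,
        "total_tokens": 77
    },
    "model": "gpt2"
}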

Adding Monitoring and Analytics

To understand how the integration behaves in production, we track usage and performance over time. Add the following to utils/monitoring.py:

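A minimal sketch of the monitor; the UsageMonitor class name and the usage_logs.json destination come from the project structure above, while the constructor arguments are assumptions:

# utils/monitoring.py (sketch)
import json
import logging
from datetime import datetime

class UsageMonitor:
    def __init__(self, log_file: str = "usage_logs.json"):
        self.log_file = log_file
        self.logger = logging.getLogger(__name__)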

The UsageMonitor class provides:

  • A timestamped record of every API call
  • Token usage tracking for each request
  • Response time tracking for performance analysis
  • Persistent metrics in usage_logs.json, which you could later feed into a database or an analytics dashboard

Next, add the method that records the metrics for each request:

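A sketch of that method, continuing the class above (the exact fields recorded are assumptions):

    # utils/monitoring.py (continued, inside UsageMonitor)
    def log_request(self, endpoint: str, usage: dict, duration_ms: float) -> None:
        """Append one request's metrics to the log file (one JSON record per line)."""
        entry = {
            "timestamp": datetime.utcnow().isoformat(),
            "endpoint": endpoint,
            "prompt_tokens": usage.get("prompt_tokens", 0),
            "completion_tokens": usage.get("completion_tokens", 0),
            "total_tokens": usage.get("total_tokens", 0),
            "duration_ms": duration_ms,
        }
        try:
            with open(self.log_file, "a") as f:
                f.write(json.dumps(entry) + "\n")
        except Exception as e:
            self.logger.error(f"Failed to write usage log: {str(e)}")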

Then integrate the UsageMonitor into the API routes so that every call is recorded:

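One way to wire it in, as a sketch (timing with time.perf_counter is an assumption):

# api/routes.py (additions; a sketch)
import time
from utils.monitoring import UsageMonitor

@lru_cache()
def get_usage_monitor() -> UsageMonitor:
    return UsageMonitor()

# Inside moderate_content, take the monitor as one more dependency
# (monitor: UsageMonitor = Depends(get_usage_monitor)) and wrap the model call:
#
#     start = time.perf_counter()
#     raw = await llm_client.complete(prompt, **params)
#     duration_ms = (time.perf_counter() - start) * 1000
#     monitor.log_request("/moderate", raw["usage"], duration_ms)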

Finally, implement a /stats endpoint that reads the collected metrics back:

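A sketch of the endpoint, reading back the log written by UsageMonitor (the aggregate fields are assumptions):

# api/routes.py (additions; a sketch)
import json

@router.get("/stats")
async def get_stats():
    """Aggregate usage metrics from the JSON log."""
    total_requests = 0
    total_tokens = 0
    try:
        with open("usage_logs.json", "r") as f:
            for line in f:
                entry = json.loads(line)
                total_requests += 1
                total_tokens += entry.get("total_tokens", 0)
    except FileNotFoundError:
        pass  # No traffic logged yet
    return {"total_requests": total_requests, "total_tokens": total_tokens}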

Hitting this endpoint alongside regular /moderate traffic gives you a live view of the service's usage statistics.


Conclusion

In this guide, we've walked through building a production-ready LLM integration step by step. We covered API client design, caching, prompt management, rate limiting, and monitoring, with error handling running through every layer. Together these pieces form a complete content moderation service.

Here are some directions in which you could extend the system:

  • Fine-tuning a model on your own domain data
  • A/B testing for different prompt variants
  • A dashboard for deeper usage analytics
  • Cost tracking and optimization
  • Automated prompt optimization

Remember, we started with the GPT-2 model, but you can adapt these patterns to any LLM provider. Pick the model that fits your requirements and budget; the surrounding architecture stays the same.

I hope this guide helps you build more robust LLM integrations. If anything is unclear or you run into problems, questions are always welcome.

Happy coding!
