

Exploring the Canvas Series: combined with Transformers.js to achieve intelligent image processing

Nov 26, 2024 09:26 PM

Introduction

I am currently maintaining a powerful open-source creative drawing board. The board integrates many interesting brushes and auxiliary drawing functions, letting users experience a whole new drawing effect. On both mobile and PC, it delivers a good interactive experience and effect display.

In this article, I will explain in detail how to combine Transformers.js to achieve background removal and image marker segmentation. The results are shown below.

(Demo image: background removal and image marker segmentation in the drawing board)

Link: https://songlh.top/paint-board/

GitHub: https://github.com/LHRUN/paint-board. Welcome to star it ⭐

Transformers.js

Transformers.js is a powerful JavaScript library based on Hugging Face's Transformers that can run directly in the browser without relying on server-side computation. This means you can run models locally, which improves efficiency and reduces deployment and maintenance costs.
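For example, here is a minimal sketch of in-browser inference using the high-level pipeline API (the input sentence and logged output are just illustrative; the model is downloaded and cached by the browser on first use):

import { pipeline } from '@huggingface/transformers'

// Create a sentiment-analysis pipeline; the model is fetched and cached on first call
const classifier = await pipeline('sentiment-analysis')

// Inference runs entirely in the browser; no server round-trip required
const result = await classifier('Transformers.js makes in-browser ML easy!')
console.log(result) // e.g. [{ label: 'POSITIVE', score: 0.99 }]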

Transformers.js currently provides 1,000 models on Hugging Face, covering a wide range of domains and satisfying most needs. Tasks such as image processing, text generation, translation, and sentiment analysis can all be handled easily through Transformers.js. You can search for models as follows.

(Screenshot: searching for models on Hugging Face)

The current major version of Transformers.js is V3, which adds many great features; for details, see Transformers.js v3: WebGPU Support, New Models & Tasks, and More….

Both of the features described in this post use WebGPU support, which is only available in V3 and greatly improves processing speed, with parsing now taking milliseconds. Note, however, that not many browsers support WebGPU yet, so it is recommended to visit with the latest version of Chrome.
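Since WebGPU support varies across browsers, it is worth feature-detecting it before loading a model. The components below simply report unsupported browsers, but as an alternative sketch (assuming the WASM backend is acceptable for your use case) you could fall back to WASM, which is slower but widely supported:

import { AutoModel } from '@huggingface/transformers'

// navigator.gpu is only defined in WebGPU-capable browsers (e.g. recent Chrome)
const device = navigator.gpu ? 'webgpu' : 'wasm'

// Load the model on the best available backend
const model = await AutoModel.from_pretrained('Xenova/modnet', { device })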

Function 1: Remove background

To remove the background I use the Xenova/modnet model, which looks like this

(Screenshot: the Xenova/modnet model page on Hugging Face)

The processing logic can be divided into three steps:

  1. Initialize the state, and load the model and processor.
  2. Process the image: generate the mask with the model and composite it onto a canvas (see processImages in the code below).
  3. Display the effect. The interface is up to your own design; nowadays it is popular to use a divider line to dynamically contrast the image before and after background removal (a minimal sketch of this follows the list below).
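As a minimal sketch of such a before/after divider (the component name and styling here are illustrative, not from the project), you can stack the processed image on top of the original and clip it with a range input:

import { FC, useState } from 'react'

const CompareSlider: FC<{ original: string; processed: string }> = ({
  original,
  processed
}) => {
  const [percent, setPercent] = useState(50)

  return (
    <div className="relative">
      <img className="w-full" src={original} />
      {/* Clip the processed image so only the left `percent` of it is visible */}
      <img
        className="w-full absolute top-0 left-0"
        src={processed}
        style={{ clipPath: `inset(0 ${100 - percent}% 0 0)` }}
      />
      <input
        type="range"
        min={0}
        max={100}
        value={percent}
        onChange={(e) => setPercent(Number(e.target.value))}
      />
    </div>
  )
}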

The code logic is as follows (React + TypeScript). For full details, see my project's source code at src/components/boardOperation/uploadImage/index.tsx.

import { useState, FC, useRef, useEffect, useMemo } from 'react'
import {
  env,
  AutoModel,
  AutoProcessor,
  RawImage,
  PreTrainedModel,
  Processor
} from '@huggingface/transformers'

const REMOVE_BACKGROUND_STATUS = {
  LOADING: 0,
  NO_SUPPORT_WEBGPU: 1,
  LOAD_ERROR: 2,
  LOAD_SUCCESS: 3,
  PROCESSING: 4,
  PROCESSING_SUCCESS: 5
}

type RemoveBackgroundStatusType =
  (typeof REMOVE_BACKGROUND_STATUS)[keyof typeof REMOVE_BACKGROUND_STATUS]

const UploadImage: FC<{ url: string }> = ({ url }) => {
  const [removeBackgroundStatus, setRemoveBackgroundStatus] =
    useState<RemoveBackgroundStatusType>()
  const [processedImage, setProcessedImage] = useState('')

  const modelRef = useRef<PreTrainedModel>()
  const processorRef = useRef<Processor>()

  const removeBackgroundBtnTip = useMemo(() => {
    switch (removeBackgroundStatus) {
      case REMOVE_BACKGROUND_STATUS.LOADING:
        return 'Remove background function loading'
      case REMOVE_BACKGROUND_STATUS.NO_SUPPORT_WEBGPU:
        return 'WebGPU is not supported in this browser, to use the remove background function, please use the latest version of Google Chrome'
      case REMOVE_BACKGROUND_STATUS.LOAD_ERROR:
        return 'Remove background function failed to load'
      case REMOVE_BACKGROUND_STATUS.LOAD_SUCCESS:
        return 'Remove background function loaded successfully'
      case REMOVE_BACKGROUND_STATUS.PROCESSING:
        return 'Remove Background Processing'
      case REMOVE_BACKGROUND_STATUS.PROCESSING_SUCCESS:
        return 'Remove Background Processing Success'
      default:
        return ''
    }
  }, [removeBackgroundStatus])

  useEffect(() => {
    ;(async () => {
      try {
        if (removeBackgroundStatus === REMOVE_BACKGROUND_STATUS.LOADING) {
          return
        }
        setRemoveBackgroundStatus(REMOVE_BACKGROUND_STATUS.LOADING)

        // Checking WebGPU Support
        if (!navigator?.gpu) {
          setRemoveBackgroundStatus(REMOVE_BACKGROUND_STATUS.NO_SUPPORT_WEBGPU)
          return
        }
        const model_id = 'Xenova/modnet'
        if (env.backends.onnx.wasm) {
          env.backends.onnx.wasm.proxy = false
        }

        // Load model and processor
        modelRef.current ??= await AutoModel.from_pretrained(model_id, {
          device: 'webgpu'
        })
        processorRef.current ??= await AutoProcessor.from_pretrained(model_id)
        setRemoveBackgroundStatus(REMOVE_BACKGROUND_STATUS.LOAD_SUCCESS)
      } catch (err) {
        console.log('err', err)
        setRemoveBackgroundStatus(REMOVE_BACKGROUND_STATUS.LOAD_ERROR)
      }
    })()
  }, [])

  const processImages = async () => {
    const model = modelRef.current
    const processor = processorRef.current

    if (!model || !processor) {
      return
    }

    setRemoveBackgroundStatus(REMOVE_BACKGROUND_STATUS.PROCESSING)

    // load image
    const img = await RawImage.fromURL(url)

    // Pre-processed image
    const { pixel_values } = await processor(img)

    // Generate image mask
    const { output } = await model({ input: pixel_values })
    const maskData = (
      await RawImage.fromTensor(output[0].mul(255).to('uint8')).resize(
        img.width,
        img.height
      )
    ).data

    // Create a new canvas
    const canvas = document.createElement('canvas')
    canvas.width = img.width
    canvas.height = img.height
    const ctx = canvas.getContext('2d') as CanvasRenderingContext2D

    // Draw the original image
    ctx.drawImage(img.toCanvas(), 0, 0)

    // Updating the mask area
    const pixelData = ctx.getImageData(0, 0, img.width, img.height)
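    // maskData is a single-channel alpha matte: write each value into the alpha byte (every 4th channel) of the drawn image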
    for (let i = 0; i < maskData.length; ++i) {
      pixelData.data[4 * i + 3] = maskData[i]
    }
    ctx.putImageData(pixelData, 0, 0)

    // Save new image
    setProcessedImage(canvas.toDataURL('image/png'))
    setRemoveBackgroundStatus(REMOVE_BACKGROUND_STATUS.PROCESSING_SUCCESS)
  }

  return (
    <div className="card shadow-xl">
      <button
        className={`btn btn-primary btn-sm ${
          ![
            REMOVE_BACKGROUND_STATUS.LOAD_SUCCESS,
            REMOVE_BACKGROUND_STATUS.PROCESSING_SUCCESS,
            undefined
          ].includes(removeBackgroundStatus)
            ? 'btn-disabled'
            : ''
        }`}
        onClick={processImages}
      >
        Remove background
      </button>
      <div className="text-xs text-base-content mt-2 flex">
        {removeBackgroundBtnTip}
      </div>
      <div className="relative mt-4 border border-base-content border-dashed rounded-lg overflow-hidden">
        <img
          className={`w-[50vw] max-w-[400px] h-[50vh] max-h-[400px] object-contain`}
          src={url}
        />
        {processedImage && (
          <img
            className={`w-full h-full absolute top-0 left-0 z-[2] object-contain`}
            src={processedImage}
          />
        )}
      </div>
    </div>
  )
}

export default UploadImage

Function 2: Image Marker Segmentation

Image marker segmentation is implemented using the Xenova/slimsam-77-uniform model. The effect is shown below: once the image has loaded, you can click on it, and a segmentation mask is generated from the coordinates of your click.

(Demo image: click-based image marker segmentation effect)

The processing logic can be divided into five steps:

  1. Initialize the state, and load the model and processor.
  2. Get the image and load it, then save the image data and its embedding data.
  3. Listen for clicks on the image and record the click data, divided into positive and negative markers. After each click, decode the click data to generate mask data, then draw the segmentation effect based on that mask.
  4. Interface display: this is entirely up to your own design.
  5. Save the image on click: match the mask pixel data against the original image data, then export the result by drawing to a canvas (a sketch of this step follows the component code below).

The code logic is as follows (React + TypeScript). For full details, see my project's source code at src/components/boardOperation/uploadImage/imageSegmentation.tsx.

import { useState, useRef, useEffect, useMemo, MouseEvent, FC } from 'react'
import {
  SamModel,
  AutoProcessor,
  RawImage,
  PreTrainedModel,
  Processor,
  Tensor,
  SamImageProcessorResult
} from '@huggingface/transformers'

import LoadingIcon from '@/components/icons/loading.svg?react'
import PositiveIcon from '@/components/icons/boardOperation/image-segmentation-positive.svg?react'
import NegativeIcon from '@/components/icons/boardOperation/image-segmentation-negative.svg?react'

interface MarkPoint {
  position: number[]
  label: number
}

const SEGMENTATION_STATUS = {
  LOADING: 0,
  NO_SUPPORT_WEBGPU: 1,
  LOAD_ERROR: 2,
  LOAD_SUCCESS: 3,
  PROCESSING: 4,
  PROCESSING_SUCCESS: 5
}

type SegmentationStatusType =
  (typeof SEGMENTATION_STATUS)[keyof typeof SEGMENTATION_STATUS]

const ImageSegmentation: FC<{ url: string }> = ({ url }) => {
  const [markPoints, setMarkPoints] = useState<MarkPoint[]>([])
  const [segmentationStatus, setSegmentationStatus] =
    useState<SegmentationStatusType>()
  const [pointStatus, setPointStatus] = useState<boolean>(true)

  const maskCanvasRef = useRef<HTMLCanvasElement>(null) // Segmentation mask
  const modelRef = useRef<PreTrainedModel>() // model
  const processorRef = useRef<Processor>() // processor
  const imageInputRef = useRef<RawImage>() // original image
  const imageProcessed = useRef<SamImageProcessorResult>() // Processed image
  const imageEmbeddings = useRef<Tensor>() // Embedding data

  const segmentationTip = useMemo(() => {
    switch (segmentationStatus) {
      case SEGMENTATION_STATUS.LOADING:
        return 'Image Segmentation function Loading'
      case SEGMENTATION_STATUS.NO_SUPPORT_WEBGPU:
        return 'WebGPU is not supported in this browser, to use the image segmentation function, please use the latest version of Google Chrome.'
      case SEGMENTATION_STATUS.LOAD_ERROR:
        return 'Image Segmentation function failed to load'
      case SEGMENTATION_STATUS.LOAD_SUCCESS:
        return 'Image Segmentation function loaded successfully'
      case SEGMENTATION_STATUS.PROCESSING:
        return 'Image Processing...'
      case SEGMENTATION_STATUS.PROCESSING_SUCCESS:
        return 'The image has been processed successfully, you can click on the image to mark it, the green mask area is the segmentation area.'
      default:
        return ''
    }
  }, [segmentationStatus])

  // 1. load model and processor
  useEffect(() => {
    ;(async () => {
      try {
        if (segmentationStatus === SEGMENTATION_STATUS.LOADING) {
          return
        }

        setSegmentationStatus(SEGMENTATION_STATUS.LOADING)
        if (!navigator?.gpu) {
          setSegmentationStatus(SEGMENTATION_STATUS.NO_SUPPORT_WEBGPU)
          return
        }

        const model_id = 'Xenova/slimsam-77-uniform'
        modelRef.current ??= await SamModel.from_pretrained(model_id, {
          dtype: 'fp16', // or "fp32"
          device: 'webgpu'
        })
        processorRef.current ??= await AutoProcessor.from_pretrained(model_id)

        setSegmentationStatus(SEGMENTATION_STATUS.LOAD_SUCCESS)
      } catch (err) {
        console.log('err', err)
        setSegmentationStatus(SEGMENTATION_STATUS.LOAD_ERROR)
      }
    })()
  }, [])

  // 2. process image
  useEffect(() => {
    ;(async () => {
      try {
        if (
          !modelRef.current ||
          !processorRef.current ||
          !url ||
          segmentationStatus === SEGMENTATION_STATUS.PROCESSING
        ) {
          return
        }
        setSegmentationStatus(SEGMENTATION_STATUS.PROCESSING)
        clearPoints()

        imageInputRef.current = await RawImage.fromURL(url)
        imageProcessed.current = await processorRef.current(
          imageInputRef.current
        )
        imageEmbeddings.current = await (
          modelRef.current as any
        ).get_image_embeddings(imageProcessed.current)

        setSegmentationStatus(SEGMENTATION_STATUS.PROCESSING_SUCCESS)
      } catch (err) {
        console.log('err', err)
      }
    })()
  }, [url, modelRef.current, processorRef.current])

  // Updating the mask effect
  function updateMaskOverlay(mask: RawImage, scores: Float32Array) {
    const maskCanvas = maskCanvasRef.current
    if (!maskCanvas) {
      return
    }
    const maskContext = maskCanvas.getContext('2d') as CanvasRenderingContext2D

    // Update canvas dimensions (if different)
    if (maskCanvas.width !== mask.width || maskCanvas.height !== mask.height) {
      maskCanvas.width = mask.width
      maskCanvas.height = mask.height
    }

    // Allocate buffer for pixel data
    const imageData = maskContext.createImageData(
      maskCanvas.width,
      maskCanvas.height
    )

    // Select best mask
    const numMasks = scores.length // 3
    let bestIndex = 0
    for (let i = 1; i < numMasks; ++i) {
      if (scores[i] > scores[bestIndex]) {
        bestIndex = i
      }
    }

    // Fill mask with colour
    const pixelData = imageData.data
    for (let i = 0; i < pixelData.length; ++i) {
      if (mask.data[numMasks * i + bestIndex] === 1) {
        const offset = 4 * i
        pixelData[offset] = 101 // r
        pixelData[offset + 1] = 204 // g
        pixelData[offset + 2] = 138 // b
        pixelData[offset + 3] = 255 // a
      }
    }

    // Draw image data to context
    maskContext.putImageData(imageData, 0, 0)
  }

  // 3. Decoding based on click data
  const decode = async (markPoints: MarkPoint[]) => {
    if (
      !modelRef.current ||
      !imageEmbeddings.current ||
      !processorRef.current ||
      !imageProcessed.current
    ) {
      return
    }

    // If there is no click data, simply clear the segmentation overlay
    if (!markPoints.length && maskCanvasRef.current) {
      const maskContext = maskCanvasRef.current.getContext(
        '2d'
      ) as CanvasRenderingContext2D
      maskContext.clearRect(
        0,
        0,
        maskCanvasRef.current.width,
        maskCanvasRef.current.height
      )
      return
    }

    // Prepare inputs for decoding
    const reshaped = imageProcessed.current.reshaped_input_sizes[0]
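    // Click positions are stored normalized to [0, 1]; scale them to the model's reshaped input size ([height, width])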
    const points = markPoints
      .map((x) => [x.position[0] * reshaped[1], x.position[1] * reshaped[0]])
      .flat(Infinity)
    const labels = markPoints.map((x) => BigInt(x.label)).flat(Infinity)

    const num_points = markPoints.length
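    // SAM expects input_points shaped [batch, point_batch, num_points, 2] and input_labels shaped [batch, point_batch, num_points]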
    const input_points = new Tensor('float32', points, [1, 1, num_points, 2])
    const input_labels = new Tensor('int64', labels, [1, 1, num_points])

    // Generate the mask
    const { pred_masks, iou_scores } = await modelRef.current({
      ...imageEmbeddings.current,
      input_points,
      input_labels
    })

    // Post-process the mask
    const masks = await (processorRef.current as any).post_process_masks(
      pred_masks,
      imageProcessed.current.original_sizes,
      imageProcessed.current.reshaped_input_sizes
    )

    updateMaskOverlay(RawImage.fromTensor(masks[0][0]), iou_scores.data)
  }

  const clamp = (x: number, min = 0, max = 1) => {
    return Math.max(Math.min(x, max), min)
  }

  const clickImage = (e: MouseEvent) => {
    if (segmentationStatus !== SEGMENTATION_STATUS.PROCESSING_SUCCESS) {
      return
    }

    const { clientX, clientY, currentTarget } = e
    const { left, top } = currentTarget.getBoundingClientRect()

    const x = clamp(
      (clientX - left + currentTarget.scrollLeft) / currentTarget.scrollWidth
    )
    const y = clamp(
      (clientY - top + currentTarget.scrollTop) / currentTarget.scrollHeight
    )

    const existingPointIndex = markPoints.findIndex(
      (point) =>
        Math.abs(point.position[0] - x) < 0.01 &&
        Math.abs(point.position[1] - y) < 0.01 &&
        point.label === (pointStatus ? 1 : 0)
    )

    const newPoints = [...markPoints]
    if (existingPointIndex !== -1) {
      // If there is a marker in the currently clicked area, it is deleted.
      newPoints.splice(existingPointIndex, 1)
    } else {
      newPoints.push({
        position: [x, y],
        label: pointStatus ? 1 : 0
      })
    }

    setMarkPoints(newPoints)
    decode(newPoints)
  }

  const clearPoints = () => {
    setMarkPoints([])
    decode([])
  }

  return (
    <div className="card shadow-xl overflow-auto">
      <div className="flex items-center gap-x-3">
        <button className="btn btn-primary btn-sm" onClick={clearPoints}>
          Clear Points
        </button>

        <button
          className="btn btn-primary btn-sm"
          onClick={() => setPointStatus((v) => !v)}
        >
          {pointStatus ? 'Positive' : 'Negative'}
        </button>
      </div>
      <div className="text-xs text-base-content mt-2">{segmentationTip}</div>
      {/* The original markup is truncated at this point; below is a minimal reconstruction (see the repository source for the full version) */}
      <div className="relative mt-4 border border-base-content border-dashed rounded-lg overflow-hidden">
        <img
          className="w-[60vw] max-w-[500px] h-[60vh] max-h-[500px] object-contain"
          src={url}
          onClick={clickImage}
        />
        {/* Mask overlay drawn by updateMaskOverlay; pointer-events-none lets clicks reach the image */}
        <canvas
          ref={maskCanvasRef}
          className="absolute top-0 left-0 w-full h-full pointer-events-none"
        />
      </div>
    </div>
  )
}

export default ImageSegmentation
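Step 5, saving the segmented region, is not shown in the truncated listing above. The following is a minimal sketch of the idea, assuming the mask canvas has the same dimensions as the original image (the function name is illustrative, not from the project):

import { RawImage } from '@huggingface/transformers'

// Keep only the pixels covered by the segmentation mask and export as PNG
const exportSegmentedImage = (
  image: RawImage,
  maskCanvas: HTMLCanvasElement
): string => {
  const canvas = document.createElement('canvas')
  canvas.width = image.width
  canvas.height = image.height
  const ctx = canvas.getContext('2d') as CanvasRenderingContext2D

  // Draw the original image and read back its pixel data
  ctx.drawImage(image.toCanvas(), 0, 0)
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height)

  // Read the mask overlay pixels (alpha > 0 means inside the segmentation)
  const maskCtx = maskCanvas.getContext('2d') as CanvasRenderingContext2D
  const maskData = maskCtx.getImageData(0, 0, maskCanvas.width, maskCanvas.height)

  // Make every pixel outside the mask fully transparent
  for (let i = 3; i < imageData.data.length; i += 4) {
    if (maskData.data[i] === 0) {
      imageData.data[i] = 0
    }
  }
  ctx.putImageData(imageData, 0, 0)

  return canvas.toDataURL('image/png')
}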



Conclusion

Thank you for reading. That is the whole content of this article; I hope it is helpful to you. You are welcome to like and bookmark it. If you have any questions, please feel free to discuss them in the comments section!


          

            
        
