

CVPR 2024 | Four-dimensional space-time pre-training of autonomous driving world model

Aug 07, 2024, 07:01 PM

Peking University and the EVOL Lab team jointly proposed DriveWorld, a four-dimensional (4D) spatio-temporal pre-training algorithm for autonomous driving. The method pre-trains with a world model: it designs a memory state-space model for 4D spatio-temporal modeling and reduces the aleatoric and epistemic uncertainty faced by autonomous driving by predicting the occupancy grid of the scene. The paper has been accepted by CVPR 2024.


Paper title: DriveWorld: 4D Pre-trained Scene Understanding via World Models for Autonomous Driving

Paper link: http://ipnx.cn/link/293643def1ba1161bcdcfbfe434ab76d

1. Motivation

Scene understanding for autonomous driving spans multiple levels, from perceiving the current scene to predicting how it will change. It involves not only the three-dimensional structure of space but also dynamic changes along the time dimension, so the model must capture the intrinsic correlations of four-dimensional (4D) space-time to make accurate decisions. Learning 4D spatio-temporal representations is extremely challenging due to the stochastic nature of natural scenes, the partial observability of the environment, and the diversity of downstream tasks. Pre-training plays a key role in extracting general representations from large amounts of data and in building foundation models with general knowledge, yet there are still relatively few studies on 4D spatio-temporal pre-training for autonomous driving.

The design and implementation of autonomous driving systems need to face and deal with various uncertainties, which are mainly divided into two categories: Aleatoric uncertainty and Epistemic uncertainty. Aleatoric uncertainty arises from the inherent randomness of the world, such as the sudden movement of pedestrians or the unexpected behavior of vehicles. Epistemic uncertainty arises from incomplete knowledge of the environment, such as lack of information due to occlusion or sensor limitations. To effectively deal with these uncertainties, autonomous driving systems must be able to use past experience to predict possible future states and make inferences about unseen areas. This work addresses this challenge through a four-dimensional spatiotemporal pre-trained world model, aiming to improve the performance of autonomous driving systems in perception, prediction, and planning tasks.

2. Method

Given a sequence of T video frames o_{1:T} observed by the autonomous vehicle's surround-view cameras, the corresponding expert actions a_{1:T}, and the 3D occupancy grid labels y_{1:T} (which can be obtained from 3D LiDAR point clouds and pose data), we aim to learn, through a world model, a compact BEV representation that predicts the current and future 3D occupancy grids from past multi-view images and actions.
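
For concreteness, the sketch below lays out the tensors one pre-training clip would carry under this setup; the shapes (8 frames, 6 cameras, a 200×200×16 voxel grid, 2-dimensional actions) are illustrative assumptions rather than values from the paper.

```python
from dataclasses import dataclass
import torch

@dataclass
class DriveClip:
    images: torch.Tensor     # o_{1:T}: (T, N_cam, 3, H, W) surround-view frames
    actions: torch.Tensor    # a_{1:T}: (T, A) expert actions, e.g. speed and steering
    occupancy: torch.Tensor  # y_{1:T}: (T, X, Y, Z) 3D occupancy labels from LiDAR + poses

# Example instance with the assumed sizes: T=8 frames, 6 cameras, 200x200x16 voxels.
clip = DriveClip(
    images=torch.zeros(8, 6, 3, 224, 400),
    actions=torch.zeros(8, 2),
    occupancy=torch.zeros(8, 200, 200, 16, dtype=torch.long),
)
```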


2.1 Temporal probabilistic model

To equip the model with the ability to model 4D space-time, we first introduce two latent variables (h_{1:T}, s_{1:T}): h_t is the history variable, which summarizes all historical information up to time step t, and s_t is the stochastic state variable, which is the key to predicting future states. h_t is updated from the history h_{1:t−1} and the stochastic states s_{1:t−1}. To predict future states, we follow the Recurrent State-Space Model (RSSM) and construct both a posterior state distribution q(s_t | o_{≤t}, a_{<t}) and a prior state distribution p(s_t | h_{t−1}, s_{t−1}).

Since the dimensionality of the BEV feature is high, we first compress it into a one-dimensional vector x_t, and then sample from a Gaussian conditioned on (h_t, a_{t−1}, x_t) to obtain the posterior state distribution:
q(s_t | o_{≤t}, a_{<t}) ∼ N(μ_φ(h_t, a_{t−1}, x_t), σ_φ(h_t, a_{t−1}, x_t) I),
where s_t is parameterized as a normal distribution with diagonal covariance and the initial distribution is set to s_1 ∼ N(0, I). (μ_φ, σ_φ) are multilayer perceptrons that parameterize the posterior state distribution.

In the absence of observed images, the model derives the prior state distribution from historical information and predicted actions:
p(s_t | h_{t−1}, s_{t−1}) ∼ N(μ_θ(h_t, â_{t−1}), σ_θ(h_t, â_{t−1}) I),
where (μ_θ, σ_θ) are multilayer perceptrons that parameterize the prior state distribution, and â_{t−1} is predicted by a policy network from the history h_{t−1} and the stochastic state s_{t−1}.
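
To make the recurrence concrete, the sketch below implements one posterior/prior step of such a recurrent state-space model in PyTorch. The GRU-based history update, the MLP widths, and the softplus parameterization of the standard deviations are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RSSMCell(nn.Module):
    """One posterior/prior step of the memory state-space model (illustrative sketch)."""
    def __init__(self, h_dim=512, s_dim=64, x_dim=512, a_dim=2):
        super().__init__()
        self.update = nn.GRUCell(s_dim, h_dim)                 # h_t = f_theta(h_{t-1}, s_{t-1})
        self.posterior_net = nn.Sequential(                    # (mu_phi, sigma_phi)
            nn.Linear(h_dim + a_dim + x_dim, 256), nn.ELU(), nn.Linear(256, 2 * s_dim))
        self.prior_net = nn.Sequential(                        # (mu_theta, sigma_theta)
            nn.Linear(h_dim + a_dim, 256), nn.ELU(), nn.Linear(256, 2 * s_dim))
        self.policy = nn.Linear(h_dim + s_dim, a_dim)          # predicts a_hat_{t-1}

    def forward(self, h_prev, s_prev, a_prev, x_t):
        h_t = self.update(s_prev, h_prev)                      # deterministic history update
        mu_q, raw_q = self.posterior_net(torch.cat([h_t, a_prev, x_t], -1)).chunk(2, -1)
        posterior = torch.distributions.Normal(mu_q, F.softplus(raw_q) + 1e-4)
        a_hat = self.policy(torch.cat([h_prev, s_prev], -1))   # predicted action
        mu_p, raw_p = self.prior_net(torch.cat([h_t, a_hat], -1)).chunk(2, -1)
        prior = torch.distributions.Normal(mu_p, F.softplus(raw_p) + 1e-4)
        s_t = posterior.rsample()                              # sample the stochastic state
        return h_t, s_t, posterior, prior
```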


2.1.1 Dynamic information propagation

The stochastic state s_t carries the dynamic information of the scene, but propagating it across time steps must also account for the ego vehicle's own motion. To inject motion information into the state, we introduce motion-aware layer normalization (MLN). The ego-vehicle velocity v and the time interval Δt are encoded, and (v, Δt) is passed through two linear layers (ξ_1, ξ_2) to learn the affine transformation parameters γ and β: γ = ξ_1(v, Δt), β = ξ_2(v, Δt). The stochastic state is then transformed as s_t = γ · LN(s_t) + β, so that it carries ego-motion information. The motion-aware stochastic states then interact with the history variable h_t, which aggregates the history h_{1:t}, allowing dynamic information to propagate along the temporal dimension. The history variable is updated as h_{t+1} = f_θ(h_t, s_t).
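
A possible PyTorch rendering of the MLN step is sketched below; the embedding width and the parameter-free LayerNorm placed before the learned affine transform are assumptions.

```python
import torch
import torch.nn as nn

class MotionAwareLayerNorm(nn.Module):
    """Sketch of MLN: gamma = xi_1(v, dt), beta = xi_2(v, dt), s_t = gamma * LN(s_t) + beta."""
    def __init__(self, s_dim=64, m_dim=16):
        super().__init__()
        self.embed = nn.Linear(2, m_dim)     # encode ego velocity v and time interval dt
        self.xi_1 = nn.Linear(m_dim, s_dim)  # affine scale gamma
        self.xi_2 = nn.Linear(m_dim, s_dim)  # affine shift beta
        self.ln = nn.LayerNorm(s_dim, elementwise_affine=False)

    def forward(self, s_t, v, dt):
        m = self.embed(torch.stack([v, dt], dim=-1))
        gamma, beta = self.xi_1(m), self.xi_2(m)
        return gamma * self.ln(s_t) + beta   # motion-aware stochastic state
```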

2.1.2 Static scene propagation

Autonomous driving scenes also contain a large amount of static information, such as roads and buildings, that changes little over time. While the dynamic propagation above captures moving objects and temporal changes, the static structure of the scene does not need to be re-estimated at every step and can instead be extracted once and shared across time. Concretely, the observation o′ at the first of the T time steps is encoded into a BEV feature b′, and a module z_θ produces the static scene feature b̂ = z_θ(b′). This static feature b̂ is then combined with the motion-aware stochastic states s_t to decode the current and future scenes.
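
A minimal sketch of this idea is given below: the static BEV feature is computed once from the first frame and broadcast to all T time steps. The two-layer convolutional form of z_θ is an assumption.

```python
import torch
import torch.nn as nn

class StaticScenePropagation(nn.Module):
    """Sketch: b_hat = z_theta(b') is computed once and reused for every time step."""
    def __init__(self, c=64):
        super().__init__()
        self.z_theta = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(), nn.Conv2d(c, c, 3, padding=1))

    def forward(self, bev_first, T):
        b_hat = self.z_theta(bev_first)                       # static feature from the first frame
        return b_hat.unsqueeze(1).expand(-1, T, -1, -1, -1)   # shared across the T time steps
```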

2.2 Occupancy grid prediction

The world model decodes the current and future scenes by predicting their 3D occupancy grids, supervised by the 3D occupancy grid labels. The prediction is ŷ_t = l_θ(m_θ(h̃_t, s_t), b̂), where m_θ maps the one-dimensional state features back to a BEV feature and l_θ is a decoder that predicts the 3D occupancy from BEV features. Through this 4D pre-training task, the model must explain both the current scene and its future evolution, yielding spatio-temporal representations that help reduce the aleatoric and epistemic uncertainty discussed above.
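
The following is a minimal sketch of this decoding step, with m_θ realized as a linear projection to a coarse BEV map that is then upsampled, and l_θ as a 1×1 convolution over the fused dynamic and static BEV features; all sizes and layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyHead(nn.Module):
    """Sketch of y_hat_t = l_theta(m_theta(h_t, s_t), b_hat) with illustrative sizes."""
    def __init__(self, h_dim=512, s_dim=64, c=64, seed_hw=25, bev_hw=(200, 200), z=16, n_cls=17):
        super().__init__()
        self.c, self.seed_hw, self.bev_hw, self.z, self.n_cls = c, seed_hw, bev_hw, z, n_cls
        self.m_theta = nn.Linear(h_dim + s_dim, c * seed_hw * seed_hw)  # 1-D state -> coarse BEV map
        self.l_theta = nn.Conv2d(2 * c, z * n_cls, kernel_size=1)       # fused BEV -> per-voxel logits

    def forward(self, h_t, s_t, b_hat):
        B = h_t.shape[0]
        dyn = self.m_theta(torch.cat([h_t, s_t], -1)).view(B, self.c, self.seed_hw, self.seed_hw)
        dyn = F.interpolate(dyn, size=self.bev_hw, mode="bilinear", align_corners=False)
        logits = self.l_theta(torch.cat([dyn, b_hat], dim=1))           # fuse dynamic + static features
        return logits.view(B, self.n_cls, self.z, *self.bev_hw)         # (B, classes, Z, X, Y)
```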

2.3 Task prompt design

Although the 4D pre-trained representation can benefit a wide range of downstream tasks, different tasks attend to different information: some rely more on how the scene will evolve, while others depend mainly on the current static structure. Forcing a single feature to serve all tasks can therefore introduce conflicts, so we introduce "task prompts" to adapt the representation to each task. Specifically, a pre-trained language model g_ψ(·) (e.g., BERT or CLIP) encodes the task prompt into a semantic embedding; for example, the prompt for the 3D occupancy prediction task can be "predict the 3D occupancy grid of the current and future scenes." The prompt p_text is encoded as g_ψ(p_text), and q_ψ(g_ψ(p_text)) is then used to condition the BEV features so that they adapt to different downstream tasks.
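
The sketch below shows one possible way to realize this: a frozen BERT encoder stands in for g_ψ, and q_ψ is a linear projection whose output rescales the BEV channels. The specific text model, the use of the [CLS] token, and the channel-wise scaling are illustrative assumptions; the paper's exact conditioning mechanism may differ.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class TaskPrompt(nn.Module):
    """Sketch: encode a task prompt with a frozen text encoder g_psi, condition BEV features via q_psi."""
    def __init__(self, bev_channels=64, text_model="bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(text_model)
        self.g_psi = AutoModel.from_pretrained(text_model).eval()   # frozen language model
        for p in self.g_psi.parameters():
            p.requires_grad_(False)
        self.q_psi = nn.Linear(self.g_psi.config.hidden_size, bev_channels)

    def forward(self, bev, prompt):
        tokens = self.tokenizer(prompt, return_tensors="pt").to(bev.device)
        with torch.no_grad():
            emb = self.g_psi(**tokens).last_hidden_state[:, 0]       # [CLS] embedding g_psi(p_text)
        scale = self.q_psi(emb)[:, :, None, None]                    # q_psi(g_psi(p_text))
        return bev * scale                                           # task-conditioned BEV feature

# prompt = "predict the 3D occupancy grid of the current and future scenes"
```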

2.4 Pre-training objective

The pre-training objective of DriveWorld minimizes the divergence between the posterior and prior state distributions (a Kullback–Leibler (KL) divergence), together with a cross-entropy (CE) loss on the 3D occupancy grid and an L1 loss on the action. The model reconstructs the 3D occupancy grids and actions for the T observed frames and additionally predicts them for L future frames.
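
A compact sketch of how these three terms could be combined is given below; the unit loss weights, the KL direction, and the mean reductions are assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F
from torch.distributions import kl_divergence

def pretraining_loss(posteriors, priors, occ_logits, occ_labels, act_pred, act_gt,
                     w_kl=1.0, w_occ=1.0, w_act=1.0):
    """Sketch: KL(posterior || prior) + occupancy cross-entropy + action L1 over all time steps."""
    kl = torch.stack([kl_divergence(q, p).mean() for q, p in zip(posteriors, priors)]).mean()
    # occ_logits: (B, T+L, C, Z, X, Y); occ_labels: (B, T+L, Z, X, Y) class indices
    occ = F.cross_entropy(occ_logits.flatten(0, 1), occ_labels.flatten(0, 1))
    act = F.l1_loss(act_pred, act_gt)
    return w_kl * kl + w_occ * occ + w_act * act
```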

3. Experiments

3.1 Experimental setup

We use the nuScenes and OpenScene datasets for pre-training and fine-tune on nuScenes for the downstream tasks. The 3D occupancy labels used to supervise pre-training are generated from LiDAR point clouds and pose data.

3.2 Experimental results

The main experimental results are shown in the figures below; please refer to the paper for more detailed results.

[Figures: experimental result tables reported in the paper]

4. Conclusion

DriveWorld performs 4D spatio-temporal pre-training for autonomous driving through a world model, learning compact spatio-temporal representations from multi-view driving videos. By predicting the occupancy grids of current and future scenes, it reduces the aleatoric and epistemic uncertainty faced by autonomous driving and improves downstream perception, prediction, and planning tasks. We hope that world-model-based pre-training on larger-scale driving data can further advance autonomous driving toward general foundation models.


About the EVOL team and authors

Zhao Jian is with the China Telecom Artificial Intelligence Research Institute, where he leads the EVOL Lab. His main research directions include multimedia analysis and embodied intelligence.

As first author he has published two papers in T-PAMI (IF: 24.314) and three in IJCV (IF: 13.369), with nearly 60 CCF-A papers in total, and has carried out joint research with companies including Baidu, Ant Financial, and Qihoo 360. His honors include the Wu Wenjun Artificial Intelligence Natural Science Award (2023), a first prize of the Wu Wenjun Artificial Intelligence Award (2022, 2/5), the Lee Hwee Kuan Award of the Pattern Recognition and Machine Intelligence Association (PREMIA), and an ACM Multimedia best paper award (first author, 1/208, CCF-A, 2018), as well as championships in seven international academic competitions.

He also holds academic service roles, including director of the Beijing Image and Graphics Society; editorial roles with Artificial Intelligence Advances, IET Computer Vision, Pattern Recognition Letters, and Electronics; and organizing or area-chair roles with VALSE, ACM Multimedia 2021, CICAI 2022/2023, and CCBR 2024.

GitHub homepage: https://zhaoj9014.github.io

Lab homepage: http://ipnx.cn/link/2e36742b377be90ffbf553692153d9a1

Jin Lei is with the Beijing University of Posts and Telecommunications. He has published more than 40 SCI/EI papers at venues including CVPR, AAAI, NIPS, and ACM MM, 11 of them as first or corresponding author in venues such as IEEE Transactions on Multimedia (JCR Q1 of the Chinese Academy of Sciences), CVPR and ACM MM (CCF-A), and Sensors (JCR Q2 of the Chinese Academy of Sciences). He has participated in two national key R&D projects and four National Natural Science Foundation projects, and co-organized the Anti-UAV workshops and challenges at ICCV 2021 and CVPR 2023.

Min Cheng is a PhD student at Peking University whose research focuses on scene perception for autonomous driving. He has published papers at CVPR, ICCV, ICRA, and RAL, including first-author papers at CVPR (a CCF-A conference), ICRA (a top robotics conference), and RAL (a leading robotics journal), and has participated in national key R&D projects.
