Interviewer: How would you query 10 million rows of data?
Aug 15, 2023 04:34 PM

Recently I have been doing mock interviews and resume reviews, and I noticed that many people go weak at the knees as soon as they see interview questions about tables with tens of millions of rows.
Perhaps they have never worked with a table that large and have no idea what actually happens when you query tens of millions of rows.
Today I will walk you through a hands-on test. This run is based on MySQL 5.7.26.
Preparing data
What if I don't have 10 million rows of data?
No data? Then create some.
Is creating the data hard?
Generate 10 million rows from application code?
That would be far too slow and might take a whole day. A database script runs much faster.
Create table
CREATE TABLE `user_operation_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `ip` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `op_data` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr1` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr2` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr3` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr4` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr5` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr6` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr7` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr8` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr9` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr10` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr11` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `attr12` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;
Create data script
Using batch inserts is much faster, committing every 1,000 rows. If each batch is too large, the batch insert itself starts to slow down.
DELIMITER ;;
CREATE PROCEDURE batch_insert_log()
BEGIN
  DECLARE i INT DEFAULT 1;
  DECLARE userId INT DEFAULT 10000000;
  set @execSql = 'INSERT INTO `test`.`user_operation_log`(`user_id`, `ip`, `op_data`, `attr1`, `attr2`, `attr3`, `attr4`, `attr5`, `attr6`, `attr7`, `attr8`, `attr9`, `attr10`, `attr11`, `attr12`) VALUES';
  set @execData = '';
  WHILE i <= 10000000 DO
    -- a deliberately long attribute value, used to pad each row
    set @attr = "'測試很長很長很長很長很長很長很長很長很長很長很長很長很長很長很長很長很長的屬性'";
    set @execData = concat(@execData, "(", userId + i, ", '10.0.69.175', '用戶登錄操作'", ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ",", @attr, ")");
    -- every 1000 rows, run one multi-row INSERT and commit the batch
    if i % 1000 = 0 then
      set @stmtSql = concat(@execSql, @execData, ";");
      prepare stmt from @stmtSql;
      execute stmt;
      DEALLOCATE prepare stmt;
      commit;
      set @execData = "";
    else
      set @execData = concat(@execData, ",");
    end if;
    SET i = i + 1;
  END WHILE;
END;;
DELIMITER ;
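To actually generate the data, you then call the procedure. A minimal sketch (assuming the procedure above was created in the `test` schema) might look like this; the procedure commits every 1,000 rows on its own:

-- Generate the test data
SET autocommit = 0;      -- let the procedure commit in 1000-row batches
CALL batch_insert_log();
COMMIT;
SET autocommit = 1;

If the full 10 million rows take too long on your machine, you can simply lower the loop bound inside the procedure before creating it.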
Start testing
My machine is fairly low-spec: Windows 10, a standard-voltage i5, and an SSD with read/write speeds of roughly 500 MB/s.
Because of the low spec, I only generated 3,148,000 rows for this test. They occupied about 5 GB of disk (without any secondary indexes), and the insert ran for 38 minutes. Readers with better hardware can insert more rows for their own tests.
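If you want to check how much disk the table really occupies, a rough figure can be read from information_schema. This is a convenience query I am adding here; the numbers are InnoDB estimates, and `test` is the schema assumed above:

-- Approximate on-disk size of the table (data + indexes), in GB
SELECT ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'user_operation_log';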
SELECT count(1) FROM `user_operation_log`
Return result: 3148000
Running this count three times took:
14060 ms 13755 ms 13447 ms
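A side note I am adding here: COUNT(1) has to scan the whole table, which is why it is this slow. If an approximate figure is enough, InnoDB's table statistics can be read instead, though the estimate can be off by a noticeable margin:

-- Approximate row count from table statistics (an estimate, not an exact count)
SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'user_operation_log';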
Ordinary pagination query
MySQL supports the LIMIT clause for selecting a specified number of rows; Oracle uses ROWNUM instead.
The MySQL pagination syntax is as follows:
SELECT * FROM table LIMIT [offset,] rows | rows OFFSET offset
The first parameter specifies the offset of the first row returned; the second parameter specifies the maximum number of rows to return.
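For instance (a small illustration of the syntax, not part of the benchmark), the two statements below are equivalent: both skip the first 5 rows and return at most the next 10:

-- Two equivalent ways to write the same page
SELECT * FROM `user_operation_log` LIMIT 5, 10;
SELECT * FROM `user_operation_log` LIMIT 10 OFFSET 5;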
Now let's start testing:
SELECT * FROM `user_operation_log` LIMIT 10000, 10
The three runs took:
59 ms 49 ms 50 ms
That looks acceptable, but this is a local database, so it is naturally a bit faster.
Testing from another angle
Same offset, different row counts
SELECT * FROM `user_operation_log` LIMIT 10000, 10
SELECT * FROM `user_operation_log` LIMIT 10000, 100
SELECT * FROM `user_operation_log` LIMIT 10000, 1000
SELECT * FROM `user_operation_log` LIMIT 10000, 10000
SELECT * FROM `user_operation_log` LIMIT 10000, 100000
SELECT * FROM `user_operation_log` LIMIT 10000, 1000000
The query times are as follows:
Quantity | First time | Second time | Third time |
---|---|---|---|
10 items | 53ms | 52ms | 47ms |
100 items | 50ms | 60ms | 55ms |
1000 items | 61ms | 74ms | 60ms |
10000 items | 164ms | 180ms | 217ms |
100000 items | 1609ms | 1741ms | 1764ms |
1000000 items | 16219ms | 16889ms | 17081ms |
From these results we can already see that the more rows a query returns, the longer it takes.

Same row count, different offsets:

Offset | First time | Second time | Third time |
---|---|---|---|
100 | 36ms | 40ms | 36ms |
1000 | 31ms | 38ms | 32ms |
10000 | 53ms | 48ms | 51ms |
100000 | 622ms | 576ms | 627ms |
1000000 | 4891ms | 5076ms | 4856ms |
From the results above we can conclude: the larger the offset, the longer the query takes. This is because MySQL has to read through and discard every row before the offset before it can return the rows you asked for.
SELECT * FROM `user_operation_log` LIMIT 100, 100
SELECT id, attr FROM `user_operation_log` LIMIT 100, 100
How to optimize
Now that the experiments above have given us our conclusions, let's tackle the two problems separately: large offsets and large result sets.
Optimizing the large-offset problem
Using a subquery
We can first locate the id at the offset position, and then query the data from there:
SELECT * FROM `user_operation_log` LIMIT 1000000, 10
SELECT id FROM `user_operation_log` LIMIT 1000000, 1
SELECT * FROM `user_operation_log` WHERE id >= (SELECT id FROM `user_operation_log` LIMIT 1000000, 1) LIMIT 10
The results are as follows:
SQL | Time taken |
---|---|
1st statement | 4818ms |
2nd statement (without index) | 4329ms |
2nd statement (with index) | 199ms |
3rd statement (without index) | 4319ms |
3rd statement (with index) | 201ms |
From these results we can conclude:
The first statement takes the longest, and the third is only slightly better than the first; the subquery is much faster once it can use an index.
Drawback: this only works when the id is monotonically increasing.
If the id is not increasing, you can use the following form instead, but its drawback is that the paginated query has to live inside a subquery.
Note: some MySQL versions do not support LIMIT inside an IN subquery, which is why several nested SELECTs are used here.
SELECT * FROM `user_operation_log` WHERE id IN (SELECT t.id FROM (SELECT id FROM `user_operation_log` LIMIT 1000000, 10) AS t)
Using id range restriction
This approach is more demanding: the id must be continuous and increasing, and you have to calculate the id range yourself and then use BETWEEN. The SQL is as follows:
SELECT * FROM `user_operation_log` WHERE id between 1000000 AND 1000100 LIMIT 100
SELECT * FROM `user_operation_log` WHERE id >= 1000000 LIMIT 100
The results are as follows:
SQL | Time taken |
---|---|
1st statement | 22ms |
2nd statement | 21ms |
The results show that this approach is extremely fast.
Note: the LIMIT here only restricts the number of rows; no offset is involved.
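A closely related pattern, sketched here as my own addition rather than part of the original test: if the application remembers the last id it returned, the next page can start strictly after it, which avoids both offsets and manual range calculations. The literal 1000100 below is just an example value:

-- Hypothetical "next page" query (keyset pagination): no offset, no range maths
SELECT *
FROM `user_operation_log`
WHERE id > 1000100   -- last id returned by the previous page (example value)
ORDER BY id
LIMIT 100;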
Optimizing the large-result-set problem
The amount of data returned also directly affects query speed:
SELECT * FROM `user_operation_log` LIMIT 1, 1000000
SELECT id FROM `user_operation_log` LIMIT 1, 1000000
SELECT id, user_id, ip, op_data, attr1, attr2, attr3, attr4, attr5, attr6, attr7, attr8, attr9, attr10, attr11, attr12 FROM `user_operation_log` LIMIT 1, 1000000
The results are as follows:
SQL | Time taken |
---|---|
1st statement | 15676ms |
2nd statement | 7298ms |
3rd statement | 15960ms |
The results show that dropping unneeded columns also improves query efficiency noticeably.
The first and third queries are almost the same speed, and at this point you will surely complain: why should I write out all those fields when a * does the job?
Note that my MySQL server and client are on the same machine, so the numbers do not differ much. Readers whose setup allows it should test with the client and MySQL on separate machines.
Isn't SELECT * convenient?
While we're at it, a word on why SELECT * is often banned. Isn't it attractive precisely because it's simple and brainless?
Two main points:
Using " SELECT *
" the database needs to parse more objects, fields, permissions, attributes and other related content. When the SQL statements are complex and there are many hard parses, it will put a heavy burden on the database.Increases network overhead, *
Sometimes useless and large text fields such as log and IconMD5 are mistakenly added, and the data transmission size will increase geometrically. Especially since MySQL and the application are not on the same machine, this overhead is very obvious.
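To make the column-list point concrete, here is a small, hedged sketch; the index below is my own addition and was not part of the test setup. When a query touches only indexed columns, InnoDB can answer it from the index alone, while SELECT * always has to fetch full rows:

-- Hypothetical secondary index, added only for this illustration
ALTER TABLE `user_operation_log` ADD INDEX `idx_user_id` (`user_id`);

-- Can typically be served from the index alone
-- (EXPLAIN would usually show "Using index" in the Extra column)
EXPLAIN SELECT `user_id` FROM `user_operation_log` LIMIT 1000000, 100;

-- Must read full rows, so no index-only scan is possible
EXPLAIN SELECT * FROM `user_operation_log` LIMIT 1000000, 100;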