Broker Load statement
LOAD LABEL gaofeng_broker_load_HDD
(
    DATA INFILE("hdfs://eoop/user/coue_data/hive_db/couta_test/ader_lal_offline_0813_1/*")
    INTO TABLE ads_user
)
WITH BROKER "hdfs_broker"
(
    "dfs.nameservices" = "eadhadoop",
    "dfs.ha.namenodes.eadhadoop" = "nn1,nn2",
    "dfs.namenode.rpc-address.eadhadoop.nn1" = "h4:8000",
    "dfs.namenode.rpc-address.eadhadoop.nn2" = "z7:8000",
    "dfs.client.failover.proxy.provider.eadhadoop" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
    "hadoop.security.authentication" = "kerberos",
    "kerberos_principal" = "ou3.CN",
    "kerberos_keytab_content" = "BQ8uMTYzLkNPTQALY291cnNlXgAAAAFfVyLbAQABAAgCtp0qmxxP8QAAAAE="
);
Reported error:
Task cancelled
type:ETL_RUN_FAIL; msg:errCode = 2, detailMessage = Scan bytes per broker scanner exceed limit: 3221225472
Solution:
The Doris test environment has three BE nodes. The FE configuration item max_bytes_per_broker_scanner defaults to 3 GB (3221225472 bytes, exactly the limit in the error message), while the files to be imported total about 13 GB, so the parameter needs to be increased.
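Before changing anything, you can confirm the value currently in effect with ADMIN SHOW FRONTEND CONFIG; the statement below is a minimal sketch, and on a default installation the returned value should be 3221225472.

ADMIN SHOW FRONTEND CONFIG LIKE "max_bytes_per_broker_scanner";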
Execute the following dynamic configuration command on the FE:
ADMIN SET FRONTEND CONFIG ("max_bytes_per_broker_scanner" = "5368709120");
This raises the limit to 5 GB per broker scanner, so the maximum amount of data a single Broker Load job can import on this cluster is 5 GB × 3 (BE nodes) = 15 GB.
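Note that ADMIN SET FRONTEND CONFIG only changes the in-memory value of the running FE, so the setting is lost on an FE restart. To persist it, the same key can also be added to fe.conf; the line below is a sketch assuming the standard key = value format of that file.

max_bytes_per_broker_scanner = 5368709120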
Then execute the Broker Load again.
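After resubmitting, the job state can be checked with SHOW LOAD. This is a sketch assuming the job was submitted in the current database under the label used above; the job should progress from PENDING through LOADING to FINISHED.

SHOW LOAD WHERE LABEL = "gaofeng_broker_load_HDD" ORDER BY CreateTime DESC LIMIT 1;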