
Flask Database Migration Error: ERROR [flask_migrate] Error: Can't locate revision identified by 'a1c25fe0fc0e'

Problem Description:

In Flask web development, we use the Flask-Migrate library to migrate the database, so that changes to the database models defined in our scripts can be applied to the database without deleting and rebuilding it.
After using python manage.py db init to create a migration repository, we run the migrate and upgrade commands of Flask-Migrate, i.e. the following two instructions:

python manage.py db migrate
python manage.py db upgrade

the error ERROR [flask_migrate] Error: Can't locate revision identified by 'a1c25fe0fc0e' may appear. The revision ID (here 'a1c25fe0fc0e') will differ for different database models.

Solution:

The reason for the above error is that Flask-Migrate cannot find the revision with the ID 'a1c25fe0fc0e'. We just need to re-create the missing revision ID explicitly with the revision command.
Run the following commands in order in the shell command-line window:

python manage.py db revision --rev-id <the revision ID from the error message, e.g. a1c25fe0fc0e above>
python manage.py db migrate
python manage.py db upgrade

After running these commands in order, the database migration succeeds.
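
For reference, the commands above assume a manage.py along these lines. This is only a minimal sketch of the classic Flask-Script + Flask-Migrate (2.x) setup, not the author's actual script; the database URI is an assumption:

# manage.py - minimal sketch (assumed setup, not the author's actual script)
from flask import Flask
from flask_script import Manager
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate, MigrateCommand

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///data.sqlite'  # assumed URI

db = SQLAlchemy(app)
migrate = Migrate(app, db)  # register Flask-Migrate against the app and models

manager = Manager(app)
manager.add_command('db', MigrateCommand)  # enables "python manage.py db ..."

if __name__ == '__main__':
    manager.run()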

Finally, if there are any deficiencies in this article, criticism and corrections are welcome!

[Solved] Pyodbc.ProgrammingError: No results. Previous SQL was not a query.

Calling a stored procedure on a remote SQL Server with Python. Code fragment:


import pyodbc

# host, user, password, dbname and driver are assumed to be defined earlier
conn = pyodbc.connect(SERVER=host, UID=user, PWD=password, DATABASE=dbname,
                      DRIVER=driver)
cur = conn.cursor()
if not cur:
    raise NameError('Database connection error')
cur.execute("EXEC GetLastData")
resList = cur.fetchall()

Execution error:

pyodbc.ProgrammingError: No results.  Previous SQL was not a query.

After checking, the stored procedure executes normally in the SQL Server environment, so the problem appears only when it is called through pyodbc. A similar problem was found on Stack Overflow, with the following answer:
the problem was solved by adding SET NOCOUNT ON; to the beginning of the anonymous code block. That statement suppresses the record count values generated by DML statements like UPDATE … and allows the result set to be retrieved directly.

So add SET NOCOUNT ON to the stored procedure:

CREATE proc [dbo].[GetLastData]
AS
BEGIN

SET NOCOUNT ON

declare @begindate datetime,@enddate datetime
select @begindate=CONVERT(varchar(7),GETDATE(),120)+'-01'
select @enddate=DATEADD(MONTH,1,@begindate)
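
If you cannot modify the stored procedure itself, the same idea can be applied from the client side. A minimal sketch, assuming the pyodbc cursor from the fragment above:

# Assumed client-side variant: prepend SET NOCOUNT ON to the batch so the
# row-count messages do not hide the result set
cur.execute("SET NOCOUNT ON; EXEC GetLastData")
resList = cur.fetchall()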

JAVA Connect Redis Error: stop-writes-on-bgsave-error option

(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.

Go to the Redis installation directory, enter the src directory, and connect with redis-cli:

./redis-cli -a <redis password>

Then execute

127.0.0.1:6379> config set stop-writes-on-bgsave-error no

That’s it

However, this setting is applied at runtime only and will be lost after Redis restarts.
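
To make the change survive a restart, one option (assuming you can edit the server configuration) is to set the same option in redis.conf and then restart Redis:

stop-writes-on-bgsave-error no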

nested exception is org.apache.ibatis.builder.BuilderException: Error evaluating expression [Solved]

This exception is usually caused by a problem in the dynamic SQL. Find the corresponding SQL statement and check its dynamic SQL syntax against the following prompt information.

Problem description

Exception information:
nested exception is org.apache.ibatis.builder.BuilderException: Error evaluating expression 'ides'. Return value (806) was not iterable.

According to the exception message, locate the dynamic SQL statements where ides is used:

<foreach  collection="ides"  index="index" item="ides" open="(" separator="," close=")">
     #{ides}
</foreach>
...
<foreach  collection="ides"  index="index" item="ides" open="(" separator="," close=")">
     #{ides}
</foreach>

Finally, it was found that
two <foreach></foreach> statements use the same item variable name, which is also the name of the collection ('ides'). The first <foreach> binds the last element (806) to 'ides', overwriting the collection parameter, so when the second <foreach> evaluates 'ides' it gets a single value instead of a list and its dynamic SQL assembly fails.

Solution:

Change the item attribute in either statement to a different value:

<foreach  collection="ides"  index="index" item="idess" open="(" separator="," close=")">
     #{idess}
</foreach>
...
<foreach  collection="ides"  index="index" item="ides" open="(" separator="," close=")">
     #{ides}
</foreach>

ERR Slot 3300 is already busy (Redis::CommandError)

Can I set the above configuration?(type 'yes' to accept): yes
/usr/local/share/gems/gems/redis-3.0.0/lib/redis/client.rb:79:in `call': ERR Slot 3300 is already busy (Redis::CommandError)
        from /usr/local/share/gems/gems/redis-3.0.0/lib/redis.rb:2190:in `block in method_missing'
        from /usr/local/share/gems/gems/redis-3.0.0/lib/redis.rb:36:in `block in synchronize'
        from /usr/share/ruby/monitor.rb:211:in `mon_synchronize'
        from /usr/local/share/gems/gems/redis-3.0.0/lib/redis.rb:36:in `synchronize'
        from /usr/local/share/gems/gems/redis-3.0.0/lib/redis.rb:2189:in `method_missing'
        from ./redis-trib.rb:205:in `flush_node_config'
        from ./redis-trib.rb:657:in `block in flush_nodes_config'
        from ./redis-trib.rb:656:in `each'
        from ./redis-trib.rb:656:in `flush_nodes_config'
        from ./redis-trib.rb:997:in `create_cluster_cmd'
        from ./redis-trib.rb:1373:in `<main>'

This shows that slot 3300 is claimed by more than one node. I solved it as follows: open each server, execute the FLUSHALL, FLUSHDB and CLUSTER RESET instructions, and then re-create the cluster successfully.
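
A sketch of the per-node cleanup described above (host and port are assumptions; repeat for every node that will join the cluster, and note that FLUSHALL erases all data on the node):

./redis-cli -h 127.0.0.1 -p 7000
127.0.0.1:7000> FLUSHALL
127.0.0.1:7000> FLUSHDB
127.0.0.1:7000> CLUSTER RESET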

Nebula queries data and reports an error: Storage Error: part: xx error: E_RPC_FAILURE(-3)

Storage Error: part: xx  error: E_RPC_FAILURE(-3).

IndexScanExecutor failed, error E_RPC_FAILURE, part xx

The processing may have timed out due to the large amount of data. You can add storage_client_timeout_ms to the graphd configuration file; it defaults to 60 seconds (60000 ms), and you can increase it.

Modify nebula-graphd.conf to change the timeout:

--storage_client_timeout_ms=600000

Reference: Match execution failed storage error – usage problem – NebulaGraph forum

[Solved] ERROR 1396 (HY000): Operation ALTER USER failed for 'root'@'localhost'

MySQL connection error:
1251 client does not support authentication protocol requested by server; consider upgrading MySQL client
Attempting the usual ALTER USER fix then fails with: ERROR 1396 (HY000): Operation ALTER USER failed for 'root'@'localhost'

First, log in to MySQL:

mysql -u root -p

Enter the password, then run:

mysql> use mysql;
mysql> select user,host from user;

Note that in my case the host for root is '%', yet you may have executed:

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root';

Change to:

ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'root';

Operation record:

mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root';
ERROR 1396 (HY000): Operation ALTER USER failed for 'root'@'localhost'
mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select user,host from user;
+------------------+-----------+
| user             | host      |
+------------------+-----------+
| root             | %         |
| mysql.infoschema | localhost |
| mysql.session    | localhost |
| mysql.sys        | localhost |
+------------------+-----------+
4 rows in set (0.00 sec)

mysql> ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'root';
Query OK, 0 rows affected (0.00 sec)

mysql> quit
Bye
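
To confirm the change took effect, you can also check which authentication plugin the account now uses (a quick check; the plugin column is available in MySQL 5.7+):

mysql> select user, host, plugin from mysql.user where user = 'root';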

Mysql ERROR 1067: Invalid default value for ‘date’ [How to Solve]

When adding fields to a table, I suddenly found that the default value of a date-type field was reported as invalid, which is frustrating~

After troubleshooting, it turned out to be a problem with the MySQL configuration: under WAMP, sql_mode is not set in MySQL 5.7.

1. Find [mysqld] in my.ini file

2. If there is no sql_mode entry, add it; if there is, modify it as needed:

sql_mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
or
sql_mode=ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

3. Restart MySQL; on Linux, the service can be managed with the following commands:
systemctl restart mysqld.service
systemctl start mysqld.service
systemctl stop mysqld.service
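
For context, MySQL 5.7's default sql_mode includes NO_ZERO_IN_DATE and NO_ZERO_DATE, which reject date defaults such as '0000-00-00'; both variants above omit them. After restarting, you can verify the active mode:

show variables like 'sql_mode';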

MySQL server has gone away Error [How to Solve]

Reference: solution to MySQL server has gone away error – time blog
When we use MySQL to import a large SQL file, we may get the MySQL server has gone away error. The cause is that the default value of the max_allowed_packet setting is too small: this setting limits the size of a packet the MySQL server will accept, so if the imported file contains a statement larger than this value, the import fails. You only need to increase the value and import again. Let's look at how to view and set it.

View the value of max_allowed_packet

show global variables like 'max_allowed_packet';


+--------------------+---------+
| Variable_name      | Value   |
+--------------------+---------+
| max_allowed_packet | 4194304 |
+--------------------+---------+

You can see that the default is only 4 MB (4194304 bytes). Next, set the value to 150 MB (150 * 1024 * 1024 = 157286400):

set global max_allowed_packet=157286400;

View the size again

show global variables like 'max_allowed_packet';

By increasing this value, importing SQL with a large amount of data should generally succeed; if the error is still reported, increase it further. Note that a value set on the command line is only valid for the current run: after the MySQL service restarts, the default is restored. To make the change permanent, modify the configuration file (add max_allowed_packet = 150M to my.cnf). In practice, we rarely import such large amounts of data, so setting the value at runtime is usually enough and there is no need to modify the configuration file.
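
For the permanent variant, the my.cnf entry mentioned above would sit under the [mysqld] section:

[mysqld]
max_allowed_packet = 150M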

JMeter JDBC Error: No pool found named: ‘test‘ [How to Solve]

Error message

No pool found named: ‘test’, ensure Variable Name matches Variable Name of JDBC Connection Configuration

Reason

A JDBC Request needs database connection configuration to connect to the database; if the configuration cannot be found, this error is reported. The JDBC Request looks up the configuration through the value in its "Variable Name of Pool declared in JDBC Connection Configuration" field, and the JDBC Connection Configuration element publishes its configuration under the value in its "Variable Name of created pool" field. When there are multiple database configurations, these two names ensure that each query request is matched with the intended database.

Solution

Set the "Variable Name of Pool declared in JDBC Connection Configuration" in the JDBC Request and the "Variable Name of created pool" in the corresponding JDBC Connection Configuration to the same value (e.g. 'test').

Canal synchronization error: target column: name not matched

I. Problem description

We have a usage scenario for Canal:

synchronizing the same table's data from multiple sources to the same target for unified data display.

However, it was found that after a field is deleted at source 1, the Canal client logs of the other sources report the error:

target column: name not matched
After that, SQL operations on this table (such as INSERT, even when no data is inserted into the deleted field) can no longer be synchronized.

II. How to avoid

In scenarios where multiple sources synchronize to the same target through Canal, dropping fields is prohibited.