Category Archives: How to Fix

Solving the problem of a Mac automatically appending a suffix to downloaded files

I ran into an interesting problem: downloading a plain-text file on Windows adds no suffix, but the same download on a Mac gets a suffix appended. The requirement was a direct download, so the file must not be opened by the browser at download time, as shown in the figure:

As the figure shows, on the Mac the configuration file downloaded through the officially provided download link is treated as a plain-text document.
The nginx configuration originally used:

default_type application/json;
add_header Content-Disposition attachment;

What finally worked was replacing default_type with an explicit Content-Type header:

add_header Content-Disposition attachment;
add_header Content-Type application/octet-stream;

with default_type left unconfigured. This lets the Mac download the file directly, without an added suffix.
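To verify what the server actually sends, a quick header check helps. This is only a sketch; the URL is a hypothetical placeholder, not from the original post:

import urllib.request

# Send a HEAD request so we can inspect the response headers (placeholder URL)
req = urllib.request.Request('http://example.com/files/config.txt', method='HEAD')
with urllib.request.urlopen(req) as resp:
    print(resp.headers.get('Content-Type'))         # expect: application/octet-stream
    print(resp.headers.get('Content-Disposition'))  # expect: attachment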

Microsoft Edge was unable to log in. Error code: 3, 15, -2147023579

PS: I tried many methods found online and none of them worked; this one did, so I hope it helps more people.
If you download the new version of the Edge browser on Windows 10 and find that you cannot sign in to your account, it may be because no Microsoft account has been set up on the computer.
My steps were:
Open the old version of Edge: Settings – Account, and click the account entry. This jumps to the system's Add Account interface; enter your Microsoft account and sign in.
Then I checked the old version of Edge and found it had already signed in and synced automatically.
Now open the new version of Edge and click Sign in. Select Microsoft Account; a pop-up will automatically fill in the account you just signed in with.
Click Next, enter the password again, and it no longer reports an error; you can sign in normally.
Conclusion: this error is probably caused by Edge failing to detect a local Microsoft account and therefore failing to auto-complete the account.

Starting Xilinx Vivado on Ubuntu

1. If the current user is hadoop, go to the home directory /home/hadoop.
2. Open the .bashrc file in that directory with an editor.
3. At the end of the file, add: source /opt/Xilinx/Vivado/2016.4/settings64.sh
4. If Vivado still does not start, reload the file with: source .bashrc

Install web.py in Python 3.x

I recently decided to move from Python 2.7 to 3.x. Since the project uses a database, I again chose web.py, which I had been interested in before, but installation turned up all sorts of problems:

ImportError: No module named 'utils'
ModuleNotFoundError: No module named 'db'

I finally decided to give the dev version a try.

pip install web.py==0.40.dev0

It turns out that the dev version of web.py works perfectly on Python 3.x.
I personally tested Python 3.6
The code is as follows:

import pymysql

# Register pymysql as a drop-in replacement for MySQLdb,
# which web.py expects but which has no Python 3 release
pymysql.install_as_MySQLdb()
import web

# The connection parameters below are placeholders; substitute your own
db = web.database(dbn='mysql', host='db_host', port=3306,
                  user='root', pw='password', db='db_name', charset='utf8')

results = db.query('select * from user where id = 1;')

for user in results:
    print(user.name)
    print(user.id_no)
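
As a quick sanity check that web.py itself serves requests on Python 3, a minimal application also works. This is just the standard web.py hello-world as a sketch, not code from the original post:

import web

# Map the root URL to the index class
urls = ('/', 'index')

class index:
    def GET(self):
        return 'Hello, world!'

if __name__ == '__main__':
    # Starts the built-in development server on http://0.0.0.0:8080/
    app = web.application(urls, globals())
    app.run()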

Hopefully this article will help those of you who are looking for web.py available on Python 3.x.

Group By Operator in the Hive execution plan

The Group By Operator performs group aggregation. Its common attributes are:

aggregations: which aggregation functions the grouping applies
mode: generally hash, meaning a hash is computed over the keys
keys: the grouping columns; when there is no keys attribute, there is only one group
outputColumnNames: temporary column names for the output

For example

 explain select sum(sal) from tb_emp;

Look at its Group By Operator

+---------------------------------------------------------------------------------------------+
|Explain                                                                                      |
+---------------------------------------------------------------------------------------------+
|              Group By Operator                                                              |
|                aggregations: sum(sal)                                                       |
|                mode: hash                                                                   |
|                outputColumnNames: _col0                                                     |
|                Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE|
+---------------------------------------------------------------------------------------------+

Another example:

explain select deptno,sum(sal) from tb_emp group by deptno;

Look at its Group By Operator

+------------------------------------------------------------------------------------------------+
|Explain                                                                                         |
+------------------------------------------------------------------------------------------------+
|              Group By Operator                                                                 |
|                aggregations: sum(sal)                                                          |
|                keys: deptno (type: int)                                                        |
|                mode: hash                                                                      |
|                outputColumnNames: _col0, _col1                                                 |
|                Statistics: Num rows: 89 Data size: 718 Basic stats: COMPLETE Column stats: NONE|
+------------------------------------------------------------------------------------------------+

The GROUP BY implementation principle
The process of turning a GROUP BY query into a MapReduce job is as follows (a sketch follows this list):

Map: generate key-value pairs, using the GROUP BY columns as the key and the aggregation function's result as the value
Shuffle: hash the key, and send each key-value pair to the reducer selected by the hash value
Reduce: aggregate per key according to the SELECT clause's columns and aggregation function
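
As a rough illustration of these three stages, here is a minimal Python sketch; the rows, column values, and reducer count are made up, and real Hive of course runs this inside MapReduce rather than in one process:

from collections import defaultdict

# Hypothetical tb_emp rows: (deptno, sal)
rows = [(10, 1000), (20, 1500), (10, 2000), (30, 800), (20, 700)]

# Map: emit (key, value) pairs; key = GROUP BY column, value = column to aggregate
mapped = [(deptno, sal) for deptno, sal in rows]

# Shuffle: route each pair to a reducer chosen by the hash of its key
num_reducers = 2
partitions = defaultdict(list)
for key, value in mapped:
    partitions[hash(key) % num_reducers].append((key, value))

# Reduce: aggregate the values per key inside each reducer (here: sum(sal))
result = {}
for pairs in partitions.values():
    groups = defaultdict(int)
    for key, value in pairs:
        groups[key] += value
    result.update(groups)

print(result)  # e.g. {10: 3000, 20: 2200, 30: 800}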

Conclusion
The Group By Operator has four common attributes: aggregations, keys, mode, and outputColumnNames. A query without a group by clause can also have a Group By Operator.
Reference
Group By Execution Plan Analysis (Hive)

Higher-order components in React

A higher-order component (HOC) is an advanced technique for reusing component logic in React. HOCs are not part of the React API; they are a design pattern that emerges from React's compositional nature.
Concretely, a higher-order component is a function that takes a component as an argument and returns a new component.

Whereas a component transforms props into UI, a higher-order component transforms one component into another component.

const EnhancedComponent = higherOrderComponent(WrappedComponent);

InnoDB, TokuDB, MyISAM directory structure

InnoDB
Physically, an InnoDB table consists of shared tablespaces, log file groups (redo file groups), and a table structure definition file.
InnoDB has two rather different directory layouts: shared tablespaces and separate (file-per-table) tablespaces.
Which one is used is controlled by the parameter innodb_file_per_table (0: use the shared tablespace).
Check it with show variables like "innodb_file_per_table"; the files live under the directory defined by datadir.
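
A quick way to check the setting over a connection, reusing the pymysql approach from the web.py post above (a sketch; the credentials are placeholders):

import pymysql

# Placeholder connection parameters; substitute your own
conn = pymysql.connect(host='localhost', user='root', password='password')
with conn.cursor() as cur:
    cur.execute("show variables like 'innodb_file_per_table'")
    print(cur.fetchone())  # e.g. ('innodb_file_per_table', 'ON')
conn.close()
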
Separate tablespaces
With separate tablespaces enabled, each table gets its own files of the same name to store the table structure, indexes, and data. However, transaction undo logs and the redo log buffer are still kept in the shared tablespace.
table_name.frm stores the table structure definition.
table_name.ibd stores the table's indexes and data.
Advantages:

    Each table has its own tablespace, and its data and indexes live in that tablespace, so a single table can be moved between databases. Space can be reclaimed (apart from a drop table, the tablespace cannot shrink by itself): run alter table TableName engine=innodb; to reclaim unused space. For InnoDB with innodb-plugin, truncate table also shrinks the space. For tables in separate tablespaces, tablespace fragmentation has no significant impact on performance no matter how rows are deleted, and there is always a chance to deal with it.

Disadvantages:

    When a single table grows too large and occupies too much space, running out of storage can only be addressed at the operating-system level; the maximum size of a tablespace is 64TB.

Shared tablespace:
If separate tablespaces are not enabled, everything is stored in ibdata1. You can set its initial size, and it expands automatically once that size is exceeded.
Advantages:

    The tablespace can be split into multiple files placed on different disks, so a table's size is not limited by the size of a single disk. Data and files are kept together, which makes management easy.

Disadvantages:

    All data and indexes go into one file. Although a large file can be split into several smaller files, tables and indexes are still mixed together in the tablespace, so with a large amount of data, deleting many tables leaves the tablespace with a lot of dead space; applications that delete tables often, such as statistical analysis, suffer the most from a shared tablespace. Space allocated to the shared tablespace cannot be shrunk back: when a temporary index or temporary table enlarges the tablespace, deleting the related tables does not return that space (comparable to Oracle: the tablespace uses only 10M, yet the operating system shows the MySQL tablespace file as 10G), and cold backups of the database are slow.

MySQL has a "doublewrite" mechanism for writing data pages. The redo log records page operations at the physical level; if only 4KB of a page has been written when a crash occurs, the page itself is "faulty" (torn), so the redo log would record the page writes incorrectly. Hence the doublewrite: pages are first copied to the doublewrite buffer, then written sequentially to the shared tablespace, and finally a copy is written to the corresponding tablespaces.
TokuDB
When TokuDB starts, it reads tokudb.directory, organizes the table-related files according to the key information, and writes them into the INFORMATION_SCHEMA.TOKUDB_FILE_MAP table.
tokudb.directory defines the table/index file information.
tokudb.environment holds the TokuDB version information.
tokudb.rollback holds undo records.
log000000000009.tokulog27 holds redo records.
tokudb_lock_dont_delete_me_* file locks ensure that the same datadir is used by only one TokuDB process.

_test_table_name_key_name_45ca56_3_1b_b_0.tokudb index file
MyISAM
myisam_table.MYD table data
myisam_table.MYI table index

RuntimeError: log_vml_cpu not implemented for ‘Long’

Welcome to my blog.
Problem description
Executing torch.log(torch.from_numpy(np.array([1,2,2]))) raises: RuntimeError: log_vml_cpu not implemented for 'Long'
Why
Long tensors do not support the log operation. And why is the tensor Long? The numpy array was created without specifying a dtype, so numpy defaults to int64; when the array is converted to a torch tensor, its dtype therefore becomes Long.
The solution
Specify a floating-point dtype when creating the array: torch.log(torch.from_numpy(np.array([1,2,2], np.float)))
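
Putting the reproduction and the fix together as a runnable sketch (note: recent numpy versions removed the np.float alias, so np.float64 is used here instead of the post's np.float):

import numpy as np
import torch

# No dtype given: numpy defaults to int64, so the tensor's dtype becomes Long
a = torch.from_numpy(np.array([1, 2, 2]))
# torch.log(a)  # raises: RuntimeError: log_vml_cpu not implemented for 'Long'

# Fix: create the array with a floating-point dtype
b = torch.from_numpy(np.array([1, 2, 2], dtype=np.float64))
print(torch.log(b))  # tensor([0.0000, 0.6931, 0.6931], dtype=torch.float64)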