Quirky cURL error 18: transfer closed with outstanding read data remaining

Background: The company's new project is built on the Laravel framework and had no dedicated operations and maintenance work at the beginning. It uses GET/POST helper methods that wrap Guzzle for external requests. When requesting one interface, as soon as the per-page count (per_page) exceeded a certain value, an error was reported:
GuzzleHttp\Exception\RequestException: cURL error 18: transfer closed with outstanding read data remaining (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
A direct request from Postman returned the data, but Postman did not automatically pretty-print it (in retrospect, a hint that the body was incomplete), and a request via curl returned the data followed by an error message.
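For context, here is a minimal sketch of the kind of wrapped Guzzle call involved. The base URI and helper shape are assumptions for illustration; the endpoint and parameters mirror the log entry shown later.

<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

// Hypothetical client; the real project wraps this in a helper class.
$client = new Client(['base_uri' => 'http://api.example.com']);

// Small per-page values worked; past a certain size this threw
// GuzzleHttp\Exception\RequestException: cURL error 18.
$response = $client->get('/v1/user/list', [
    'query' => ['page' => 1, 'per_page' => 50],
]);

echo (string) $response->getBody();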
 
Solution:
(This applies only to the cause found in this particular case; the same error can be produced by different underlying problems.)
Run both Nginx and PHP-FPM on the server as the www user: set Nginx's worker user to www and unify the ownership configuration across the LNMP stack:

chown -R www:www /var/lib/nginx/
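For reference, a minimal sketch of the matching user settings, assuming a stock LNMP layout (the exact file paths vary by distribution):

# /etc/nginx/nginx.conf -- run Nginx workers as the same user as PHP-FPM
user www;

# /etc/php-fpm.d/www.conf -- the FPM pool's user and group
user = www
group = www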

 
Solution process:
The error message links to the libcurl error list:
http://curl.haxx.se/libcurl/c/libcurl-errors.html

CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that doesn't match the previously given size.

In other words, the amount of data actually transmitted did not match the size the server had announced.
At first we used Postman to call the same interface that the wrapper method in our project was failing on, and data did come back. We therefore assumed the Guzzle call itself was at fault and was sending the wrong request headers. We spent a lot of time testing HTTP/1.1 header attributes such as Connection: keep-alive, Content-Length, and Transfer-Encoding: chunked (sketched below), but none of it helped.
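These were the kinds of Guzzle options we experimented with; a sketch under the same assumptions as the example above, not the project's actual helper:

$response = $client->get('/v1/user/list', [
    'query'   => ['page' => 1, 'per_page' => 50],
    'headers' => [
        'Connection' => 'keep-alive',
    ],
    // Guzzle's 'version' option pins the protocol version; HTTP/1.0
    // avoids Transfer-Encoding: chunked altogether.
    'version' => 1.0,
]);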
Requesting the interface carefully with the curl command also reproduced the error message, so I turned my attention to the server-side configuration. My first guess was that some Nginx or FPM limit on transfer size was too small, but the response was no more than 100 KB, which should be well within any such limit.
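A verbose curl request makes the mismatch visible; something along these lines, with the host and parameters taken from the log entry below:

curl -v 'http://123.123.123.13:11008/v1/user/list?page=1&perpage=50' -o /dev/null
# -v shows the Content-Length the server announces; curl exits with
# "(18) transfer closed with outstanding read data remaining" when the
# connection closes before that many bytes have arrived.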
I then searched Baidu for "PHP Nginx HTTP interface returns maximum content" and found these:
https://blog.csdn.net/sakurallj/article/details/51822828
http://www.dewen.net.cn/q/1913
If the content PHP returns is too large, Nginx first buffers part of it into a temporary file under /var/lib/nginx/tmp/fastcgi, and only sends it to the client after all the content has been received. In our case, however, the user Nginx runs as had no write permission on that temp directory.
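For reference, this buffering behavior is governed by Nginx's fastcgi directives; a sketch of the relevant knobs (the sizes shown are illustrative, check your version's defaults):

# Responses that do not fit in these in-memory buffers spill to disk
# under fastcgi_temp_path (here /var/lib/nginx/tmp/fastcgi).
fastcgi_buffer_size 4k;
fastcgi_buffers 8 4k;

# A value of 0 disables the temp-file spillover entirely, which would
# also sidestep the permission problem (at the cost of reading the
# upstream synchronously).
fastcgi_max_temp_file_size 0;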
I then checked Nginx's error_log and found the telltale entry below. Once the user role was unified across the LNMP stack, the problem was solved.

2018/12/13 18:41:31 [crit] 30392#0: *380874 open() "/var/lib/nginx/tmp/fastcgi/3/43/0000000433" failed (13: Permission denied) while reading upstream, client: 182.18.28.66, server: , request: "GET /v1/user/list?page=1&perpage=50 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "123.123.123.13:11008"
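A quick way to confirm both halves of the diagnosis, assuming standard procps tooling and the default log location:

# Which users do the processes actually run as?
ps -C nginx,php-fpm -o user,comm

# Who owns the temp directory Nginx buffers into?
ls -ld /var/lib/nginx/tmp/fastcgi

# Watch for the "Permission denied" entry while re-sending the request.
tail -f /var/log/nginx/error.log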

Takeaway: my understanding of Linux's user and permission mechanism is not deep enough, and my grasp of the HTTP protocol is only partial. When you run into a weird problem, the first step should always be to check the logs!
 
