readfile
(PHP 4, PHP 5)
readfile — Outputs a file
Description
Reads a file and writes it to the output buffer.
Returns the number of bytes read from the file. If an error occurs, FALSE is returned and, unless the function was called as @readfile(), an error message is printed.
You can use a URL as a filename with this function if the fopen wrappers have been enabled. See fopen() for more details on how to specify the filename, and the List of Supported Protocols/Wrappers for the URL protocols that are supported.
You can set the optional second parameter to TRUE if you also want to search for the file in the include_path.
See also fpassthru(), file(), fopen(), include(), require(), virtual(), file_get_contents(), and the List of Supported Protocols/Wrappers.
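A minimal usage sketch (the file name and the header set are illustrative, not mandated by readfile() itself): forcing a download of a local file.
<?php
$file = 'monkey.gif'; // hypothetical local file

if (file_exists($file)) {
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . basename($file) . '"');
    header('Content-Length: ' . filesize($file));
    readfile($file);
    exit;
}
?>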
Comments
Remember, if you make a "force download" script like those mentioned below, SANITIZE YOUR INPUT!
I have seen a lot of download scripts that do not check the input, so you are able to download anything you want from the server.
Test especially for strings like ".." which make directory traversal possible. If possible, permit only the characters a-z, A-Z and 0-9, and make it possible to download only from one "download folder" — see the sketch below.
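A minimal sketch of such sanitization (the download directory and the character whitelist are assumptions; adjust to your setup):
<?php
$download_dir = '/var/www/downloads/'; // hypothetical download folder

// Strip any path component, then whitelist the remaining characters.
$name = basename(isset($_GET['file']) ? $_GET['file'] : '');
if (!preg_match('/^[a-zA-Z0-9._-]+$/', $name) || strpos($name, '..') !== false) {
    header('HTTP/1.0 400 Bad Request');
    exit('Invalid file name.');
}

$path = $download_dir . $name;
if (is_file($path)) {
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . $name . '"');
    header('Content-Length: ' . filesize($path));
    readfile($path);
}
exit;
?>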
Beware: the chunked readfile suggested by Rob Funk can easily exceed your maximum script execution time (30 seconds by default).
I suggest calling the set_time_limit function inside the while loop to reset the PHP watchdog, as in the sketch below.
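A minimal sketch of that suggestion (the chunk size and file path are assumptions):
<?php
$handle = fopen('/path/to/bigfile.bin', 'rb'); // hypothetical file
while (!feof($handle)) {
    set_time_limit(30); // reset the watchdog on every chunk
    echo fread($handle, 8192);
    flush();
}
fclose($handle);
?>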
I think that readfile() suffers from the maximum script execution time limit. The readfile() call itself always completes, even if it exceeds the default 30-second limit, but then the script is aborted.
Be warned that you can get very odd behaviour not only on large files, but also on small files if the user has a slow connection.
The best thing to do is to use
<?php
set_time_limit(0);
?>
just before the readfile() call, to disable the watchdog completely if you intend to use readfile() to transfer a file to the user.
Regarding PHP 5:
I found out that there is already a discussion @php-dev about readfile() and fpassthru() where only exactly 2 MB will be delivered.
So you may use this on PHP 5 to get larger files:
<?php
function readfile_chunked($filename, $retbytes = true) {
    $chunksize = 1 * (1024 * 1024); // how many bytes per chunk
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, $chunksize);
        echo $buffer;
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}
?>
In response to flowbee@gmail.com:
When using the readfile_chunked function noted here with files larger than 10 MB or so, I am still getting memory errors. It's because the writers have left out the all-important flush() after each read. So this is the proper chunked readfile (which isn't really readfile at all, and should probably be crossposted to passthru(), fopen(), and popen() just so browsers can find this information):
<?php
function readfile_chunked($filename, $retbytes = true) {
    $chunksize = 1 * (1024 * 1024); // how many bytes per chunk
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, $chunksize);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}
?>
All I've added is the ob_flush()/flush() pair after the echo line. Be sure to include this!
Just a note: if you're using bw_mod (current version 0.6) to limit bandwidth in Apache 2, it *will not* limit bandwidth during readfile() calls.
Using pieces of the forced-download script, adding MySQL database functions, and hiding the file location for security was what we needed for downloading .wmv files of our members' creations without prompting Media Player, while securing the file itself and using only database queries. Something to the effect below; it is very customizable for private access, remote files, and keeping your online media in order.
<?php
# Protect the script against SQL injection
$fileid = intval($_GET['id']);

# Set up the SQL statement
$sql = "SELECT id, fileurl, filename, filesize FROM ibf_movies WHERE id='$fileid'";

# Execute the SQL statement
$res = mysql_query($sql);

# Display results
while ($row = mysql_fetch_array($res)) {
    $fileurl  = $row['fileurl'];
    $filename = $row['filename'];
    $filesize = $row['filesize'];
    $file_extension = strtolower(substr(strrchr($filename, "."), 1));
    switch ($file_extension) {
        case "wmv": $ctype = "video/x-ms-wmv"; break;
        default:    $ctype = "application/force-download";
    }
    // required for IE, otherwise Content-Disposition is ignored
    if (ini_get('zlib.output_compression'))
        ini_set('zlib.output_compression', 'Off');
    header("Pragma: public");
    header("Expires: 0");
    header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
    header("Cache-Control: private", false);
    header("Content-Type: $ctype");
    header("Content-Disposition: attachment; filename=\"" . basename($filename) . "\";");
    header("Content-Transfer-Encoding: binary");
    header("Content-Length: " . $filesize);
    set_time_limit(0);
    @readfile($fileurl) or die("File not found.");
}
if (!empty($_GET['hit'])) {
    mysql_query("UPDATE ibf_movies SET downloads = downloads + 1 WHERE id='$fileid'");
}
?>
While at it, I added a hit (download) counter into download.php. Of course you need to set up the DB, table, and columns. Email me for the full setup. A session marker is also a security/logging option.
Used in the context of linking:
http://www.yourdomain.com/download.php?id=xx&hit=1
[Edited by sp@php.net: Added Protection against SQL-Injection]
A mime-type-independent forced download can also be conducted by using:
<?php
(...)
header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); // some day in the past
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
header("Content-type: application/x-download");
header("Content-Disposition: attachment; filename={$new_name}");
header("Content-Transfer-Encoding: binary");
?>
Cheers,
Peavey
Instead of using
<?php
header('Content-Type: application/force-download');
?>
use
<?php
header('Content-Type: application/octet-stream');
?>
Some browsers have trouble with force-download.
To avoid the risk of users choosing for themselves which files to download by messing with the request and doing things like inserting "../" into the "filename", simply remember that URLs are not file paths, and there's no reason why the mapping between them has to be as literal as "download.php?file=thingy.mpg" resulting in the download of the file "thingy.mpg".
It's your script, and you have full control over how it maps file requests to file names, and which requests retrieve which files; one possible indirect mapping is sketched below.
But even then, as ever, never trust ANYTHING in the request. Basic first-day-at-school security principle, that.
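For instance, a minimal sketch of such an indirect mapping (the tokens and paths are illustrative):
<?php
// Map opaque request tokens to real paths; the request never carries a path.
$files = array(
    'intro'   => '/srv/media/intro.mpg',
    'trailer' => '/srv/media/trailer.mpg',
);

$key = isset($_GET['file']) ? $_GET['file'] : '';
if (!isset($files[$key])) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($files[$key]) . '"');
header('Content-Length: ' . filesize($files[$key]));
readfile($files[$key]);
exit;
?>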
To anyone who's had problems with readfile() reading large files into memory: the problem is not readfile() itself, it's that you have output buffering on. Just turn off output buffering immediately before the call to readfile(), using something like ob_end_flush(), as in the sketch below.
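A minimal sketch of that, assuming nested output buffers may be active:
<?php
// Flush and disable every active output buffer before streaming the file.
while (ob_get_level() > 0) {
    ob_end_flush();
}
readfile('/path/to/large/file.bin'); // hypothetical path
?>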
My script works correctly on IE6 and Firefox 2 with any type of file (I hope :)):
<?php
function DownloadFile($file) { // $file = include path
    if (file_exists($file)) {
        header('Content-Description: File Transfer');
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename=' . basename($file));
        header('Content-Transfer-Encoding: binary');
        header('Expires: 0');
        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header('Pragma: public');
        header('Content-Length: ' . filesize($file));
        ob_clean();
        flush();
        readfile($file);
        exit;
    }
}
?>
Tested on Apache 2 (Win32) with PHP 5.
If you need to limit the download rate, use this code:
<?php
$local_file = 'file.zip';
$download_file = 'name.zip';
// set the download rate limit (=> 20.5 KB/s)
$download_rate = 20.5;
if (file_exists($local_file) && is_file($local_file)) {
    header('Cache-control: private');
    header('Content-Type: application/octet-stream');
    header('Content-Length: ' . filesize($local_file));
    header('Content-Disposition: filename=' . $download_file);
    flush();
    $file = fopen($local_file, "rb"); // "b" keeps binary data intact on Windows
    while (!feof($file)) {
        // send the current file part to the browser
        print fread($file, round($download_rate * 1024));
        // flush the content to the browser
        flush();
        // sleep one second
        sleep(1);
    }
    fclose($file);
} else {
    die('Error: The file ' . $local_file . ' does not exist!');
}
?>
Send a file with HTTP Range support (partial download):
<?php
function smartReadFile($location, $filename, $mimeType = 'application/octet-stream')
{
    if (!file_exists($location)) {
        header("HTTP/1.0 404 Not Found");
        return;
    }
    $size = filesize($location);
    $time = date('r', filemtime($location));
    $fm = @fopen($location, 'rb');
    if (!$fm) {
        header("HTTP/1.0 500 Internal Server Error");
        return;
    }
    // Byte range to send; both ends inclusive, as in the Range/Content-Range headers.
    $begin = 0;
    $end = $size - 1;
    if (isset($_SERVER['HTTP_RANGE'])) {
        if (preg_match('/bytes=\h*(\d+)-(\d*)[\D.*]?/i', $_SERVER['HTTP_RANGE'], $matches)) {
            $begin = intval($matches[1]);
            if (!empty($matches[2])) {
                $end = intval($matches[2]);
            }
        }
    }
    if ($begin > 0 || $end < $size - 1) {
        header('HTTP/1.0 206 Partial Content');
    } else {
        header('HTTP/1.0 200 OK');
    }
    header("Content-Type: $mimeType");
    header('Cache-Control: public, must-revalidate, max-age=0');
    header('Pragma: no-cache');
    header('Accept-Ranges: bytes');
    header('Content-Length: ' . ($end - $begin + 1));
    header("Content-Range: bytes $begin-$end/$size");
    header("Content-Disposition: inline; filename=$filename");
    header("Content-Transfer-Encoding: binary");
    header("Last-Modified: $time");
    header('Connection: close');
    $cur = $begin;
    fseek($fm, $begin, SEEK_SET);
    while (!feof($fm) && $cur <= $end && (connection_status() == 0)) {
        $chunk = fread($fm, min(1024 * 16, $end - $cur + 1));
        print $chunk;
        $cur += strlen($chunk);
    }
    fclose($fm);
}
?>
Usage:
<?php
smartReadFile("/tmp/filename","myfile.mp3","audio/mpeg")
?>
Reading a big file with fread() can be slow, but it is the only way to read the file within strict bounds. You could modify this to use fpassthru() instead of fread() and the while loop, but fpassthru() sends all data from $begin to the end of the file, which would not be fruitful if the request asks for bytes 100 to 200 of a 100 MB file.
A note on the smartReadFile function from gaosipov:
Make sure the indexes on the preg_match matches are
$begin = intval($matches[1]);
if (!empty($matches[2])) {
    $end = intval($matches[2]);
}
as in the version above. With $matches[0] and $matches[1], $begin would be set to the entire matched section and $end to what should be the begin.
See preg_match() for more details on this.
If you are using the procedures outlined in this article to force sending a file to a user, you may find that the "Content-Length" header is not being sent on some servers.
The reason is that some servers are set up to enable gzip compression by default, which adds a "Transfer-Encoding: chunked" header to such responses. This essentially overrides the "Content-Length" header and forces a chunked download. Of course, it is not needed if you are using the intelligent versions of readfile in this article.
A missing Content-Length header implies the following:
1) Your browser will not show a progress bar on downloads because it doesn't know their length
2) If you output anything (e.g. white space) after the readfile function (by mistake), the browser will add that to the end of the download, resulting in corrupt data.
The easiest way to disable this behaviour is with the following .htaccess directive.
SetEnv no-gzip dont-vary
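If you can't use .htaccess, a per-script equivalent is possible when PHP runs as an Apache module (a sketch; apache_setenv() is only available under mod_php):
<?php
// Disable gzip/mod_deflate for this response only.
if (function_exists('apache_setenv')) {
    apache_setenv('no-gzip', '1');
}
ini_set('zlib.output_compression', 'Off');
?>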
If you are lucky enough not to be on shared hosting and have Apache, look at installing mod_xsendfile.
This was the only way I found to both protect and transfer very large files (gigabytes) with PHP.
It has also proved to be much faster for basically any file.
Available directives have changed since the other note on this: XSendFileAllowAbove was replaced with XSendFilePath to allow more control over access to files outside of the webroot.
Download the source.
Install with: apxs -cia mod_xsendfile.c
Add the appropriate configuration directives to your .htaccess or httpd.conf files:
# Turn it on
XSendFile on
# Whitelist a target directory.
XSendFilePath /tmp/blah
Then to use it in your script:
<?php
$file = '/tmp/blah/foo.iso';
$download_name = basename($file);
if (file_exists($file)) {
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename=' . $download_name);
    header('X-Sendfile: ' . $file);
    exit;
}
?>
Just a note for those who face problems with names containing spaces (e.g. "test test.pdf").
In the examples (99% of the time) you will find
header('Content-Disposition: attachment; filename=' . basename($file));
but the correct way is to quote the filename (double quotes):
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
Some browsers may work without the quotes, but Firefox certainly will not, and as Mozilla explains, quoting the filename in Content-Disposition is what the RFC requires:
http://kb.mozillazine.org/Filenames_with_spaces_are_truncated_upon_download
If you are looking for an algorithm that will let you download (force download) a big file, this one may help you:
$filename = "file.csv";
$filepath = "/path/to/file/" . $filename;
// Close sessions to prevent user from waiting until
// download will finish (uncomment if needed)
//session_write_close();
set_time_limit(0);
ignore_user_abort(false);
ini_set('output_buffering', 0);
ini_set('zlib.output_compression', 0);
$chunk = 10 * 1024 * 1024; // bytes per chunk (10 MB)
$fh = fopen($filepath, "rb");
if ($fh === false) {
echo "Unable open file";
}
header('Content-Description: File Transfer');
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $filename . '"');
header('Expires: 0');
header('Cache-Control: must-revalidate');
header('Pragma: public');
header('Content-Length: ' . filesize($filepath));
// Repeat reading until EOF
while (!feof($fh)) {
echo fread($handle, $chunk);
ob_flush(); // flush output
flush();
}
exit;
In the C source, this function simply opens the path in read+binary mode, without a lock, and uses fpassthru().
If you need a locked read, use fopen(), flock(), and then fpassthru() directly, as sketched below.
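A minimal sketch of that combination (the path is illustrative):
<?php
$fp = fopen('/path/to/file.bin', 'rb'); // hypothetical file
if ($fp !== false && flock($fp, LOCK_SH)) { // shared lock is enough for reading
    fpassthru($fp);      // stream the remainder of the file to the output buffer
    flock($fp, LOCK_UN); // release the lock
}
if ($fp !== false) {
    fclose($fp);
}
?>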
Always using the MIME type 'application/octet-stream' is not optimal. Most if not all browsers will simply download files served with that type.
If you use proper MIME types (and an inline Content-Disposition), browsers will have better default actions for some of them. E.g. in the case of images, browsers will display them, which is probably what you'd want.
To deliver the file with the proper MIME type, the easiest way is to use:
header('Content-Type: ' . mime_content_type($file));
header('Content-Disposition: inline; filename="'.basename($file).'"');
To avoid errors, just be careful about whether a leading slash "/" is allowed at the beginning of the $file_name parameter.
In my case, trying to send PDF files through PHP after access logging, the leading "/" had to be removed on PHP 7.1.
flobee.at.gmail.dot.com shared the "readfile_chunked" function above. It does work, but you may encounter memory exhaustion using fread(). Meanwhile, stream_copy_to_stream() seems to use about the same amount of memory as readfile(). At least, that was the case when I was testing the "download" function for my https://github.com/Simbiat/HTTP20 library on a 1.5 GB file with a 256 MB memory limit: with fread() I got peak memory usage of ~240 MB, while with stream_copy_to_stream() it was ~150 MB.
That does not mean you can fully escape memory exhaustion, though: if you read too much at a time, you can still hit it. That is why in my library I use a helper function ("speedLimit") to check whether the selected speed limit will fit in the available memory (while allowing some headroom).
You can read the comments in the code itself for more details and raise issues for the library if you think something is incorrect there (especially since it's WIP at the moment of writing this), but so far I have been able to get consistent behavior with it. A minimal stream_copy_to_stream() sketch follows below.
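A minimal sketch of delivering a file with stream_copy_to_stream() in chunks (the chunk size and path are assumptions, not taken from the library):
<?php
$path = '/path/to/large/file.bin'; // hypothetical file
$in   = fopen($path, 'rb');
$out  = fopen('php://output', 'wb');

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));

// Copy 1 MB per iteration from the current position, so no
// full-file buffer is ever built up in memory.
while (!feof($in)) {
    stream_copy_to_stream($in, $out, 1024 * 1024);
    flush();
}
fclose($in);
fclose($out);
?>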
For anyone having the problem of their HTML page being output into the downloaded file: call ob_clean() and flush() before readfile(), as in the sketch below.
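A minimal sketch of that fix (the file name and headers are illustrative):
<?php
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="report.pdf"'); // hypothetical name

ob_clean(); // discard anything already buffered (e.g. stray HTML)
flush();    // push the headers and any remaining output to the client
readfile('/path/to/report.pdf'); // hypothetical file
exit;
?>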