proc_open
(PHP 4 >= 4.3.0, PHP 5)
proc_open — Execute a command and open file pointers for input/output
Description
proc_open() is similar to popen() but provides a much greater degree of control over the program execution.
Parameters
- cmd
  The command to execute
- descriptorspec
  An indexed array where the key represents the descriptor number and the value represents how PHP will pass that descriptor to the child process. 0 is stdin, 1 is stdout, while 2 is stderr.
  The currently supported pipe types are file and pipe.
  The file descriptor numbers are not limited to 0, 1 and 2 - you may specify any valid file descriptor number and it will be passed to the child process. This allows your script to interoperate with other scripts that run as "co-processes". In particular, this is useful for passing passphrases to programs like PGP, GPG and openssl in a more secure manner. It is also useful for reading status information provided by those programs on auxiliary file descriptors.
- pipes
  Will be set to an indexed array of file pointers that correspond to PHP's end of any pipes that are created.
- cwd
  The initial working dir for the command. This must be an absolute directory path, or NULL if you want to use the default value (the working dir of the current PHP process).
- env
  An array with the environment variables for the command that will be run, or NULL to use the same environment as the current PHP process.
- other_options
  Allows you to specify additional options. Currently supported options include:
  - suppress_errors (windows only): suppresses errors generated by this function when it's set to TRUE
  - bypass_shell (windows only): bypass cmd.exe shell when set to TRUE
  - context: stream context used when opening files (created with stream_context_create())
  - binary_pipes: open pipes in binary mode, instead of using the usual stream_encoding
Return Values
Returns a resource representing the process, which should be freed using proc_close() when you are finished with it. On failure returns FALSE.
Changelog
Version | Description |
---|---|
6.0.0 | Added the context and binary_pipes options to the other_options parameter. |
5.2.1 | Added the bypass_shell option to the other_options parameter. |
5.0.0 | Added the cwd, env and other_options parameters. |
Examples
Example #1 A proc_open() example
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
    1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
    2 => array("file", "/tmp/error-output.txt", "a") // stderr is a file to write to
);

$cwd = '/tmp';
$env = array('some_option' => 'aeiou');

$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);

if (is_resource($process)) {
    // $pipes now looks like this:
    // 0 => writeable handle connected to child stdin
    // 1 => readable handle connected to child stdout
    // Any error output will be appended to /tmp/error-output.txt

    fwrite($pipes[0], '<?php print_r($_ENV); ?>');
    fclose($pipes[0]);

    echo stream_get_contents($pipes[1]);
    fclose($pipes[1]);

    // It is important that you close any pipes before calling
    // proc_close in order to avoid a deadlock
    $return_value = proc_close($process);

    echo "command returned $return_value\n";
}
?>
The output of the above example will be something similar to:
Array
(
    [some_option] => aeiou
    [PWD] => /tmp
    [SHLVL] => 1
    [_] => /usr/local/bin/php
)
command returned 0
Notes
Note: Windows compatibility: Descriptors beyond 2 (stderr) are made available to the child process as inheritable handles, but since the Windows architecture does not associate file descriptor numbers with low-level handles, the child process does not (yet) have a means of accessing those handles. Stdin, stdout and stderr work as expected.
Note: If you only need a uni-directional (one-way) process pipe, use popen() instead, as it is much easier to use.
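For instance, a one-way read of a command's output needs nothing more than popen() (the command here is just an illustration):
<?php
// popen() gives read-only access to the command's stdout.
$fp = popen('ls -l /tmp', 'r');
if ($fp !== false) {
    while (!feof($fp)) {
        echo fgets($fp, 1024);
    }
    pclose($fp);   // returns the command's termination status
}
?>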
Comments
Just a small note in case it isn't obvious: it's possible to treat the filename as in fopen(), so you can pass through the standard input from PHP like
$descs = array (
0 => array ("file", "php://stdin", "r"),
1 => array ("pipe", "w"),
2 => array ("pipe", "w")
);
$proc = proc_open ("myprogram", $descs, $fp);
Note that if you need to be "interactive" with the user *and* the opened application, you can use stream_select to see if something is waiting on the other side of the pipe.
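A minimal sketch of that idea, assuming $pipes came from an earlier proc_open() call (the timeout value is arbitrary):
<?php
// Check whether the child has written anything to its stdout before blocking on fgets().
$read   = array($pipes[1]);   // child's stdout
$write  = NULL;
$except = NULL;
// Wait up to 2 seconds for data to become available.
if (stream_select($read, $write, $except, 2) > 0) {
    echo fgets($pipes[1], 1024);
} else {
    echo "nothing waiting on the pipe yet\n";
}
?>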
Stream functions can be used on pipes like:
- pipes from popen, proc_open
- pipes from fopen('php://stdin') (or stdout)
- sockets (unix or tcp/udp)
- many other things probably but the most important is here
More information about streams (you'll find many useful functions there):
ref.stream
The behaviour described in the following may depend on the system PHP runs on. Our platform was "Intel with Debian 3.0 Linux".
If you pass huge amounts of data (ca. >>10k) to the application you run and the application, for example, echoes them directly to stdout (without buffering the input), you will get a deadlock. This is because there are size-limited buffers (so-called pipes) between PHP and the application you run. The application will put data into the stdout buffer until it is filled, then it blocks, waiting for PHP to read from the stdout buffer. In the meantime PHP has filled the stdin buffer and waits for the application to read from it. That is the deadlock.
A solution to this problem may be to set the stdout stream to non-blocking (stream_set_blocking) and alternately write to stdin and read from stdout.
Just imagine the following example:
<?php
/* assume that strlen($in) is about 30k */
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("file", "/tmp/error-output.txt", "a")
);
$process = proc_open("cat", $descriptorspec, $pipes);
if (is_resource($process)) {
    fwrite($pipes[0], $in);
    /* fwrite writes to stdin; 'cat' will immediately write the data from stdin
     * to stdout and block when the stdout buffer is full. Then it will not
     * continue reading from stdin and PHP will block here.
     */
    fclose($pipes[0]);
    $out = '';
    while (!feof($pipes[1])) {
        $out .= fgets($pipes[1], 1024);
    }
    fclose($pipes[1]);
    $return_value = proc_close($process);
}
?>
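A rough sketch of the suggested workaround, under the same assumptions as the example above ($in holds the payload, the child is 'cat'): the child's stdout is switched to non-blocking so writes to its stdin and reads from its stdout can be interleaved, with chunk sizes kept well below a typical pipe buffer.
<?php
$process = proc_open("cat", $descriptorspec, $pipes);
if (is_resource($process)) {
    stream_set_blocking($pipes[1], false); // don't block on the child's stdout
    $out = '';
    $written = 0;
    while ($written < strlen($in)) {
        // Write a small chunk to stdin, then drain whatever stdout already holds.
        $n = fwrite($pipes[0], substr($in, $written, 8192));
        if ($n === false) {
            break;
        }
        $written += $n;
        while (($chunk = fread($pipes[1], 8192)) !== false && $chunk !== '') {
            $out .= $chunk;
        }
    }
    fclose($pipes[0]);
    stream_set_blocking($pipes[1], true);   // back to blocking for the final drain
    $out .= stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    proc_close($process);
}
?>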
proc_open is hard coded to use "/bin/sh". So if you're working in a chrooted environment, you need to make sure that /bin/sh exists, for now.
The above note on Windows compatibility is not entirely correct.
Windows will dutifully pass additional handles above 2 on to the child process, starting with Windows 95 and Windows NT 3.5. Starting with Windows 2000 it even supports this capability from the command line, using a special syntax (prefacing the redirection operator with the handle number).
These handles will be, when passed to the child, preopened for low-level IO (e.g. _read) by number. The child can reopen them for high-level (e.g. fgets) using the _fdopen or _wfdopen methods. The child can then read from or write to them the same way they would stdin or stdout.
However, child processes must be specially coded to use these handles, and if the end user is not intelligent enough to use them (e.g. "openssl < commands.txt 3< cacert.der") and the program not smart enough to check, it could cause errors or hangs.
Since I don't have access to PAM via Apache, suexec on, nor access to /etc/shadow I coughed up this way of authenticating users based on the system users details. It's really hairy and ugly, but it works.
<?php
function authenticate($user, $password) {
    $descriptorspec = array(
        0 => array("pipe", "r"),          // stdin is a pipe that the child will read from
        1 => array("pipe", "w"),          // stdout is a pipe that the child will write to
        2 => array("file", "/dev/null", "w") // stderr is discarded
    );
    $process = proc_open("su " . escapeshellarg($user), $descriptorspec, $pipes);
    if (is_resource($process)) {
        // $pipes now looks like this:
        // 0 => writeable handle connected to child stdin
        // 1 => readable handle connected to child stdout
        fwrite($pipes[0], $password);
        fclose($pipes[0]);
        fclose($pipes[1]);
        // It is important that you close any pipes before calling
        // proc_close in order to avoid a deadlock
        $return_value = proc_close($process);
        return !$return_value;
    }
}
?>
The pty option is actually disabled in the source for some reason via a #if 0 && condition. I'm not sure why it's disabled. I removed the 0 && and recompiled, after which the pty option works perfectly. Just a note.
I found that with stream blocking disabled I was sometimes attempting to read a return line before the external application had responded. So, instead, I left blocking alone and used this simple function to add a timeout to fgets():
// fgetsPending($in, $tv_sec) - get a pending line of data from stream $in, waiting a maximum of $tv_sec seconds
function fgetsPending($in, $tv_sec = 10) {
    $read   = array($in);
    $write  = NULL;
    $except = NULL;
    if (stream_select($read, $write, $except, $tv_sec)) return fgets($in);
    else return FALSE;
}
If you are going to allow data coming from user input to be passed to this function, then you should keep in mind the following warning that also applies to exec() and system():
Warning:
If you are going to allow data coming from user input to be passed to this function, then you should be using escapeshellarg() or escapeshellcmd() to make sure that users cannot trick the system into executing arbitrary commands.
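As a minimal illustration of that warning (the command and variable names are only placeholders):
<?php
// Never interpolate raw user input into the command line.
$userfile = $_GET['file'];                   // untrusted input (placeholder)
$cmd = 'wc -l ' . escapeshellarg($userfile); // the argument is now safely quoted
$process = proc_open($cmd, array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
    2 => array('pipe', 'w'),
), $pipes);
?>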
STDIN STDOUT example
test.php
<?php
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$process = proc_open('php test_gen.php', $descriptorspec, $pipes, null, null); //run test_gen.php
echo ("Start process:\n");
if (is_resource($process))
{
fwrite($pipes[0], "start\n"); // send start
echo ("\n\nStart ....".fgets($pipes[1],4096)); //get answer
fwrite($pipes[0], "get\n"); // send get
echo ("Get: ".fgets($pipes[1],4096)); //get answer
fwrite($pipes[0], "stop\n"); //send stop
echo ("\n\nStop ....".fgets($pipes[1],4096)); //get answer
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
$return_value = proc_close($process); //stop test_gen.php
echo ("Returned:".$return_value."\n");
}
?>
test_gen.php
<?php
$keys=0;
function play_stop()
{
global $keys;
$stdin_stat_arr = fstat(STDIN);
if($stdin_stat_arr['size'] != 0)
{
$val_in=fread(STDIN,4096);
switch($val_in)
{
case "start\n": echo "Started\n";
return false;
break;
case "stop\n": echo "Stopped\n";
$keys=0;
return false;
break;
case "pause\n": echo "Paused\n";
return false;
break;
case "get\n": echo ($keys."\n");
return true;
break;
default: echo("Передан не верный параметр: ".$val_in."\n");
return true;
exit();
}
}else{return true;}
}
while(true)
{
while(play_stop()){usleep(1000);}
while(play_stop()){$keys++;usleep(10);}
}
?>
I needed to emulate a tty for a process (it wouldn't write to stdout or read from stdin), so I found this:
<?php
$descriptorspec = array(0 => array('pty'),
1 => array('pty'),
2 => array('pty'));
?>
The pipes are then bidirectional.
It took me a long time (and three consecutive projects) to figure this out. Because popen() and proc_open() return valid process resources even when the command fails, it's awkward to determine when it really has failed if you're opening a non-interactive process like "sendmail -t".
I had previously guessed that reading from STDERR immediately after starting the process would work, and it does... but when the command is successful PHP just hangs, because STDERR is empty and it's waiting for data to be written to it.
The solution is a simple stream_set_blocking($pipes[2], 0) immediately after calling proc_open().
<?php
$this->_proc = proc_open($command, $descriptorSpec, $pipes);
stream_set_blocking($pipes[2], 0);
if ($err = stream_get_contents($pipes[2]))
{
throw new Swift_Transport_TransportException(
'Process could not be started [' . $err . ']'
);
}
?>
If the process is opened successfully $pipes[2] will be empty, but if it failed the bash/sh error will be in it.
Finally I can drop all my "workaround" error checking.
I realise this solution is obvious and I'm not sure how it took me 18 months to figure it out, but hopefully this will help someone else.
NOTE: Make sure your descriptorSpec has ( 2 => array('pipe', 'w')) for this to work.
I'm confused by the direction of the pipes. Most of the examples in this documentation open pipe #2 as "r", because they want to read from stderr. That sounds logical to me, and that's what I tried to do. That didn't work, though. When I changed it to "w", as in
<?php
$descriptorspec = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w") // stderr
);
$process = proc_open(escapeshellarg($scriptFile), $descriptorspec, $pipes, $this->wd);
...
while (!feof($pipes[1])) {
foreach($pipes as $key =>$pipe) {
$line = fread($pipe, 128);
if($line) {
print($line);
$this->log($line);
}
}
usleep(500000); // sleep() only takes whole seconds, so use usleep() for half a second
}
...
?>
everything works fine.
To complete the examples below that use proc_open to encrypt a string using GPG, here is a decrypt function:
<?php
function gpg_decrypt($string, $secret) {
$homedir = ''; // path to your gpg keyrings
$tmp_file = '/tmp/gpg_tmp.asc'; // tmp file to write to
file_put_contents($tmp_file, $string);
$text = '';
$error = '';
$descriptorspec = array(
0 => array("pipe", "r"), // stdin
1 => array("pipe", "w"), // stdout
2 => array("pipe", "w") // stderr ?? instead of a file
);
$command = 'gpg --homedir ' . $homedir . ' --batch --no-verbose --passphrase-fd 0 -d ' . $tmp_file . ' ';
$process = proc_open($command, $descriptorspec, $pipes);
if (is_resource($process)) {
fwrite($pipes[0], $secret);
fclose($pipes[0]);
while($s= fgets($pipes[1], 1024)) {
// read from the pipe
$text .= $s;
}
fclose($pipes[1]);
// optional:
while($s= fgets($pipes[2], 1024)) {
$error .= $s . "\n";
}
fclose($pipes[2]);
}
file_put_contents($tmp_file, '');
if (preg_match('/decryption failed/i', $error)) {
return false;
} else {
return $text;
}
}
?>
I managed to make a set of functions to work with GPG, since my hosting provider refused to use GPG-ME.
Included below is an example of decryption using a higher descriptor to push a passphrase.
Comments and emails welcome. :)
<?php
function GPGDecrypt($InputData, $Identity, $PassPhrase, $HomeDir="~/.gnupg", $GPGPath="/usr/bin/gpg") {
if(!is_executable($GPGPath)) {
trigger_error($GPGPath . " is not executable",
E_USER_ERROR);
die();
} else {
// Set up the descriptors
$Descriptors = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w"),
3 => array("pipe", "r") // This is the pipe we can feed the password into
);
// Build the command line and start the process
$CommandLine = $GPGPath . ' --homedir ' . $HomeDir . ' --quiet --batch --local-user "' . $Identity . '" --passphrase-fd 3 --decrypt -';
$ProcessHandle = proc_open( $CommandLine, $Descriptors, $Pipes);
if(is_resource($ProcessHandle)) {
// Push passphrase to custom pipe
fwrite($Pipes[3], $PassPhrase);
fclose($Pipes[3]);
// Push input into StdIn
fwrite($Pipes[0], $InputData);
fclose($Pipes[0]);
// Read StdOut
$StdOut = '';
while(!feof($Pipes[1])) {
$StdOut .= fgets($Pipes[1], 1024);
}
fclose($Pipes[1]);
// Read StdErr
$StdErr = '';
while(!feof($Pipes[2])) {
$StdErr .= fgets($Pipes[2], 1024);
}
fclose($Pipes[2]);
// Close the process
$ReturnCode = proc_close($ProcessHandle);
} else {
trigger_error("cannot create resource", E_USER_ERROR);
die();
}
}
if (strlen($StdOut) >= 1) {
if ($ReturnCode <= 0) {
$ReturnValue = $StdOut;
} else {
$ReturnValue = "Return Code: " . $ReturnCode . "\nOutput on StdErr:\n" . $StdErr . "\n\nStandard Output Follows:\n\n";
}
} else {
if ($ReturnCode <= 0) {
$ReturnValue = $StdErr;
} else {
$ReturnValue = "Return Code: " . $ReturnCode . "\nOutput on StdErr:\n" . $StdErr;
}
}
return $ReturnValue;
}
?>
Note that when you call an external script and retrieve large amounts of data from STDOUT and STDERR, you may need to retrieve from both alternately in non-blocking mode (with appropriate pauses if no data is retrieved), so that your PHP script doesn't lock up. This can happen if you are waiting on activity on one pipe while the external script is waiting for you to empty the other, e.g.:
<?php
$read_output = $read_error = false;
$buffer_len = $prev_buffer_len = 0;
$ms = 10;
$output = '';
$read_output = true;
$error = '';
$read_error = true;
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
// dual reading of STDOUT and STDERR stops one full pipe blocking the other, because the external script is waiting
while ($read_error != false or $read_output != false)
{
if ($read_output != false)
{
if(feof($pipes[1]))
{
fclose($pipes[1]);
$read_output = false;
}
else
{
$str = fgets($pipes[1], 1024);
$len = strlen($str);
if ($len)
{
$output .= $str;
$buffer_len += $len;
}
}
}
if ($read_error != false)
{
if(feof($pipes[2]))
{
fclose($pipes[2]);
$read_error = false;
}
else
{
$str = fgets($pipes[2], 1024);
$len = strlen($str);
if ($len)
{
$error .= $str;
$buffer_len += $len;
}
}
}
if ($buffer_len > $prev_buffer_len)
{
$prev_buffer_len = $buffer_len;
$ms = 10;
}
else
{
usleep($ms * 1000); // sleep for $ms milliseconds
if ($ms < 160)
{
$ms = $ms * 2;
}
}
}
return proc_close($process);
?>
Display output (stdout/stderr) in real time, and get the real exit code in pure PHP (no shell workaround!). It works well on my machines (debian mostly).
#!/usr/bin/php
<?php
/*
* Execute and display the output in real time (stdout + stderr).
*
* Please note this snippet is prepended with an appropriate shebang for the
* CLI. You can re-use only the function.
*
* Usage example:
* chmod u+x proc_open.php
* ./proc_open.php "ping -c 5 google.fr"; echo RetVal=$?
*/
define('BUF_SIZ', 1024);   # max buffer size
define('FD_WRITE', 0);     # stdin
define('FD_READ', 1);      # stdout
define('FD_ERR', 2);       # stderr
/*
* Wrapper for proc_*() functions.
* The first parameter $cmd is the command line to execute.
* Return the exit code of the process.
*/
function proc_exec($cmd)
{
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w")
);
$ptr = proc_open($cmd, $descriptorspec, $pipes, NULL, $_ENV);
if (!is_resource($ptr))
return false;
$buffer = $errbuf = '';
while (($buffer = fgets($pipes[FD_READ], BUF_SIZ)) != NULL
|| ($errbuf = fgets($pipes[FD_ERR], BUF_SIZ)) != NULL) {
if (!isset($flag)) {
$pstatus = proc_get_status($ptr);
$first_exitcode = $pstatus["exitcode"];
$flag = true;
}
if (strlen($buffer))
echo $buffer;
if (strlen($errbuf))
echo "ERR: " . $errbuf;
$buffer = $errbuf = ''; // reset so a stale stderr line is not echoed twice
}
foreach ($pipes as $pipe)
fclose($pipe);
/* Get the expected *exit* code to return the value */
$pstatus = proc_get_status($ptr);
if (!strlen($pstatus["exitcode"]) || $pstatus["running"]) {
/* we can trust the retval of proc_close() */
if ($pstatus["running"])
proc_terminate($ptr);
$ret = proc_close($ptr);
} else {
if ((($first_exitcode + 256) % 256) == 255
&& (($pstatus["exitcode"] + 256) % 256) != 255)
$ret = $pstatus["exitcode"];
elseif (!strlen($first_exitcode))
$ret = $pstatus["exitcode"];
elseif ((($first_exitcode + 256) % 256) != 255)
$ret = $first_exitcode;
else
$ret = 0; /* we "deduce" an EXIT_SUCCESS ;) */
proc_close($ptr);
}
return ($ret + 256) % 256;
}
/* __init__ */
if (isset($argv) && count($argv) > 1 && !empty($argv[1])) {
if (($ret = proc_exec($argv[1])) === false)
die("Error: not enough FD or out of memory.\n");
elseif ($ret == 127)
die("Command not found (returned by sh).\n");
else
exit($ret);
}
?>
It seems that stream_get_contents() on STDOUT blocks infinitely under Windows when STDERR is filled under some circumstances.
The trick is to open STDERR in append mode ("a"), then this will work, too.
<?php
$descriptorspec = array(
0 => array('pipe', 'r'), // stdin
1 => array('pipe', 'w'), // stdout
2 => array('pipe', 'a') // stderr
);
?>
Here is a small process manager object I created for my application. It can limit the maximum number of simultaneously running processes.
Processmanager class:
<?php
class Processmanager {
public $executable = "C:\\www\\_PHP5_2_10\\php";
public $root = "C:\\www\\parallelprocesses\\";
public $scripts = array();
public $processesRunning = 0;
public $processes = 3;
public $running = array();
public $sleep_time = 2;
function addScript($script, $max_execution_time = 300) {
$this->scripts[] = array("script_name" => $script,
"max_execution_time" => $max_execution_time);
}
function exec() {
$i = 0;
for(;;) {
// Fill up the slots
while (($this->processesRunning<$this->processes) and ($i<count($this->scripts))) {
echo "<span style='color: orange;'>Adding script: ".$this->scripts[$i]["script_name"]."</span><br />";
ob_flush();
flush();
$this->running[] = new Process($this->executable, $this->root, $this->scripts[$i]["script_name"], $this->scripts[$i]["max_execution_time"]);
$this->processesRunning++;
$i++;
}
// Check if done
if (($this->processesRunning==0) and ($i>=count($this->scripts))) {
break;
}
// sleep, this duration depends on your script execution time, the longer execution time, the longer sleep time
sleep($this->sleep_time);
// check what is done
foreach ($this->running as $key => $val) {
if (!$val->isRunning() or $val->isOverExecuted()) {
if (!$val->isRunning()) echo "<span style='color: green;'>Done: ".$val->script."</span><br />";
else echo "<span style='color: red;'>Killed: ".$val->script."</span><br />";
proc_close($val->resource);
unset($this->running[$key]);
$this->processesRunning--;
ob_flush();
flush();
}
}
}
}
}
?>
Process class:
<?php
class Process {
public $resource;
public $pipes;
public $script;
public $max_execution_time;
public $start_time;
function __construct(&$executable, &$root, $script, $max_execution_time) {
$this->script = $script;
$this->max_execution_time = $max_execution_time;
$descriptorspec = array(
0 => array('pipe', 'r'),
1 => array('pipe', 'w'),
2 => array('pipe', 'w')
);
$this->resource = proc_open($executable." ".$root.$this->script, $descriptorspec, $this->pipes, null, $_ENV);
$this->start_time = time();
}
// is still running?
function isRunning() {
$status = proc_get_status($this->resource);
return $status["running"];
}
// if the execution time is too long, the process is going to be killed
function isOverExecuted() {
if ($this->start_time + $this->max_execution_time < time()) return true;
else return false;
}
}
?>
Example of using:
<?php
$manager = new Processmanager();
$manager->executable = "C:\\www\\_PHP5_2_10\\php";
$manager->root = "C:\\www\\parallelprocesses\\";
$manager->processes = 3;
$manager->sleep_time = 2;
$manager->addScript("script1.php", 10);
$manager->addScript("script2.php");
$manager->addScript("script3.php");
$manager->addScript("script4.php");
$manager->addScript("script5.php");
$manager->addScript("script6.php");
$manager->exec();
?>
And possible output:
Adding script: script1.php
Adding script: script2.php
Adding script: script3.php
Done: script2.php
Adding script: script4.php
Killed: script1.php
Done: script3.php
Done: script4.php
Adding script: script5.php
Adding script: script6.php
Done: script5.php
Done: script6.php
Interestingly enough, it seems you actually have to store the return value in order for your streams to exist; if the resource is not kept in a variable, it is garbage-collected immediately and the process is closed. You can't throw it away.
In other words, this works:
<?php
$proc=proc_open("echo foo",
array(
array("pipe","r"),
array("pipe","w"),
array("pipe","w")
),
$pipes);
print stream_get_contents($pipes[1]);
?>
prints:
foo
but this doesn't work:
<?php
proc_open("echo foo",
array(
array("pipe","r"),
array("pipe","w"),
array("pipe","w")
),
$pipes);
print stream_get_contents($pipes[1]);
?>
outputs:
Warning: stream_get_contents(): <n> is not a valid stream resource in Command line code on line 1
The only difference is that in the second case we don't save the output of proc_open to a variable.
If you are, like me, tired of the buggy way proc_open handles streams and exit codes, this example demonstrates the power of pcntl, posix and some simple output redirection:
<?php
$outpipe = '/tmp/outpipe';
$inpipe = '/tmp/inpipe';
posix_mkfifo($inpipe, 0600);
posix_mkfifo($outpipe, 0600);
$pid = pcntl_fork();
//parent
if($pid) {
$in = fopen($inpipe, 'w');
fwrite($in, "A message for the inpipe reader\n");
fclose($in);
$out = fopen($outpipe, 'r');
while(!feof($out)) {
echo "From out pipe: " . fgets($out) . PHP_EOL;
}
fclose($out);
pcntl_waitpid($pid, $status);
if(pcntl_wifexited($status)) {
echo "Reliable exit code: " . pcntl_wexitstatus($status) . PHP_EOL;
}
unlink($outpipe);
unlink($inpipe);
}
//child
else {
//parent
if($pid = pcntl_fork()) {
pcntl_exec('/bin/sh', array('-c', "printf 'A message for the outpipe reader' > $outpipe 2>&1 && exit 12"));
}
//child
else {
pcntl_exec('/bin/sh', array('-c', "printf 'From in pipe: '; cat $inpipe"));
}
}
?>
Output:
From in pipe: A message for the inpipe reader
From out pipe: A message for the outpipe reader
Reliable exit code: 12
The call works as it should. No bugs.
But in most cases you won't be able to work with the pipes in blocking mode.
When your output pipe (the process' input one, $pipes[0]) is blocking, there is a case where both you and the process are blocked on output.
When your input pipe (the process' output one, $pipes[1]) is blocking, there is a case where both you and the process are blocked on your own input.
So you should switch the pipes into NON-BLOCKING mode (stream_set_blocking).
Then there is a case where you're not able to read anything (fread($pipes[1],...) == "") or write (fwrite($pipes[0],...) == 0). In this case, you had better check that the process is still alive (proc_get_status) and, if it is, wait for some time (stream_select). The situation is truly asynchronous; the process may be busy working, processing your data.
Using the shell effectively makes it impossible to know whether the command exists - proc_open() always returns a valid resource. You may even write some data into it (into the shell, actually). But eventually it will terminate, so check the process status regularly.
I would advise against using mkfifo pipes, because a filesystem FIFO (mkfifo) blocks the open/fopen call (!!!) until somebody opens the other side (Unix-related behaviour). If the pipe is not opened by the shell and the command has crashed or does not exist, you will be blocked forever.
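A minimal sketch of that pattern (non-blocking reads plus liveness checks), assuming $process and $pipes come from an earlier proc_open() call:
<?php
stream_set_blocking($pipes[1], false);       // the child's stdout
$output = '';
while (true) {
    $chunk = fread($pipes[1], 8192);
    if ($chunk !== false && $chunk !== '') {
        $output .= $chunk;
        continue;
    }
    // Nothing to read right now: is the process still alive?
    $status = proc_get_status($process);
    if (!$status['running']) {
        break;
    }
    // Still running but silent - wait up to 1 second for the pipe to become readable.
    $read = array($pipes[1]); $write = NULL; $except = NULL;
    stream_select($read, $write, $except, 1);
}
$output .= stream_get_contents($pipes[1]);   // pick up anything left after exit
fclose($pipes[1]);
proc_close($process);
?>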
$cmd can actually be multiple commands by separating each command with a newline. However, due to this it is not possible to split up one very long command over multiple lines, even when using "\\\n" syntax.
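For example, on a platform where the command line is handed to /bin/sh (the commands here are just placeholders):
<?php
// Two commands separated by a newline are executed in sequence by the same shell.
$cmd = "echo first\necho second";
$process = proc_open($cmd, array(1 => array('pipe', 'w')), $pipes);
echo stream_get_contents($pipes[1]);   // prints "first" and "second" on separate lines
fclose($pipes[1]);
proc_close($process);
?>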
Please note that if you plan to spawn multiple processes, you have to save all the resulting resources in different variables (in an array, for example). If you call $proc = proc_open(...) multiple times, overwriting the same variable, the script will block after the second call until the previous child process exits (proc_close is called implicitly).
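In other words, keep every process resource and its pipes side by side, for example in an array (the commands are placeholders):
<?php
$procs = array();
foreach (array('sleep 2', 'sleep 2', 'sleep 2') as $i => $cmd) {
    $pipes = array();
    $procs[$i] = array(
        'proc'  => proc_open($cmd, array(1 => array('pipe', 'w')), $pipes),
        'pipes' => $pipes,
    );
}
// ... work with the running children, then clean up:
foreach ($procs as $p) {
    fclose($p['pipes'][1]);
    proc_close($p['proc']);
}
?>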
This is an example of how to run a command with the TTY as its input and output, just like crontab -e or git commit do.
<?php
$descriptors = array(
array('file', '/dev/tty', 'r'),
array('file', '/dev/tty', 'w'),
array('file', '/dev/tty', 'w')
);
$process = proc_open('vim', $descriptors, $pipes);
Pipe communication can be mind-bending; I want to share a few things that help avoid that.
For proper control of the communication through the "in" and "out" pipes of the opened sub-process, remember to set both of them to non-blocking mode, and note in particular that fwrite() may return (int) 0 without it being an error - the process may simply not accept input at that moment.
So let us consider an example of decoding a gz-encoded file by using zcat as a sub-process (this is not the final version, just enough to show the important things):
<?php
// make gz file
$fd=fopen("/tmp/testPipe", "w");
for($i=0;$i<100000;$i++)
fwrite($fd, md5($i)."\n");
fclose($fd);
if(is_file("/tmp/testPipe.gz"))
unlink("/tmp/testPipe.gz");
system("gzip /tmp/testPipe");
// open process
$pipesDescr=array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("file", "/tmp/testPipe.log", "a"),
);
$process=proc_open("zcat", $pipesDescr, $pipes);
if(!is_resource($process)) throw new Exception("popen error");
// set both pipes non-blocking
stream_set_blocking($pipes[0], 0);
stream_set_blocking($pipes[1], 0);
////////////////////////////////////////////////////////////////////
$text="";
$fd=fopen("/tmp/testPipe.gz", "r");
while(!feof($fd))
{
$str=fread($fd, 16384*4);
$try=3;
while($str)
{
$len=fwrite($pipes[0], $str);
while($s=fread($pipes[1], 16384*4))
$text.=$s;
if(!$len)
{
// if you remove these paused retries, the process may fail
usleep(200000);
$try--;
if(!$try)
throw new Exception("fwrite error");
}
$str=substr($str, $len);
}
echo strlen($text)."\n";
}
fclose($fd);
fclose($pipes[0]);
// reading the rest of output stream
stream_set_blocking($pipes[1], 1);
while(!feof($pipes[1]))
{
$s=fread($pipes[1], 16384);
$text.=$s;
}
echo strlen($text)." / 3 300 000\n";
?>
Note that the usage of "bypass_shell" in Windows allows you to pass a command of length around ~32767 characters. If you do not use it, your limit is around ~8191 characters only.
See https://support.microsoft.com/en-us/kb/830473.
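A sketch of passing that option through the other_options parameter (Windows only; the executable path and arguments are placeholders):
<?php
$descriptorspec = array(
    0 => array('pipe', 'r'),
    1 => array('pipe', 'w'),
    2 => array('pipe', 'w'),
);
$process = proc_open(
    'C:\\tools\\example.exe --with --very --many --arguments', // placeholder command
    $descriptorspec,
    $pipes,
    NULL,
    NULL,
    array('bypass_shell' => true)  // skip cmd.exe, raising the length limit to ~32k
);
?>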
If you want to use the proc_open() function with socket streams, you can open a connection with fsockopen() and then just put the handles into the array of I/O descriptors:
<?php
$fh = fsockopen($address, $port);
$descriptors = [
$fh, // stdin
$fh, // stdout
$fh, // stderr
];
$proc = proc_open($cmd, $descriptors, $pipes);
For those who find that using the $cwd and $env options causes proc_open() to fail (Windows): you will need to pass all the other server environment variables as well;
$descriptorSpec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
);
proc_open(
"C:\\Windows\\System32\\PING.exe localhost",
$descriptorSpec,
$pipes,
"C:\\Windows\\System32",
$_SERVER
);
If you have a CLI script that prompts you for a password via STDIN, and you need to run it from PHP, proc_open() can get you there. It's better than doing "echo $password | command.sh", because then your password will be visible in the process list to any user who runs "ps". Alternately you could print the password to a file and use cat: "cat passwordfile.txt | command.sh", but then you've got to manage that file in a secure manner.
If your command will always prompt you for responses in a specific order, then proc_open() is quite simple to use and you don't really have to worry about blocking & non-blocking streams. For instance, to run the "passwd" command:
<?php
$descriptorspec = array(
0 => array("pipe", "r"),
1 => array("pipe", "w"),
2 => array("pipe", "w")
);
$process = proc_open(
'/usr/bin/passwd ' . escapeshellarg($username),
$descriptorspec,
$pipes
);
// It will prompt for the existing password, then the new password twice.
// You don't need to escapeshellarg() these, but you should whitelist
// them to guard against control characters, perhaps by using ctype_print()
fwrite($pipes[0], "$oldpassword\n$newpassword\n$newpassword\n");
// Read the responses if you want to look at them
$stdout = fread($pipes[1], 1024);
$stderr = fread($pipes[2], 1024);
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
$exit_status = proc_close($process);
// It returns 0 on successful password change
$success = ($exit_status === 0);
?>
If you are working on Windows and try to proc_open an executable that contains spaces in its path, you will get into trouble.
But there's a workaround which works quite well. I have found it here: http://stackoverflow.com/a/4410389/1119601
For example, if you want to execute "C:\Program Files\nodejs\node.exe", you will get the error that the command could not be found.
Try this:
<?php
$cmd = 'C:\\Program Files\\nodejs\\node.exe';
if (strtolower(substr(PHP_OS,0,3)) === 'win') {
$cmd = sprintf('cd %s && %s', escapeshellarg(dirname($cmd)), basename($cmd));
}
?>
This script will tail a file using tail -F to follow log files that are rotated.
<?php
$descriptorspec = array(
0 => array("pipe", "r"), // stdin is a pipe that the child will read from
1 => array("pipe", "w"), // stdout is a pipe that the child will write to
2 => array("pipe", "w") // stderr is a pipe that stdout will to write to
);
$filename = '/var/log/nginx/nginx-access.log';
if( !file_exists( $filename ) ) {
file_put_contents($filename, '');
}
$process = proc_open('tail -F ' . $filename, $descriptorspec, $pipes);
if (is_resource($process)) {
// $pipes now looks like this:
// 0 => writeable handle connected to child stdin
// 1 => readable handle connected to child stdout
// Any error output will be sent to $pipes[2]
// Closing $pipes[0] because we don't need it
fclose( $pipes[0] );
// stderr should not block, because that blocks the tail process
stream_set_blocking($pipes[2], 0);
$count=0;
$stream = $pipes[1];
while ( ($buf = fgets($stream,4096)) ) {
print_r($buf);
// Read stderr to see if anything goes wrong
$stderr = fread($pipes[2], 4096);
if( !empty( $stderr ) ) {
print( 'log: ' . $stderr );
}
}
fclose($pipes[1]);
fclose($pipes[2]);
// It is important that you close any pipes before calling
// proc_close in order to avoid a deadlock
proc_close($process);
}
?>
I'm not sure when the "blocking_pipes (windows only)" option was added to PHP, but users of this function should be fully aware that there is no such thing as a non-blocking pipe in PHP on Windows and that the "blocking_pipes" option does NOT function like you might expect. Passing "blocking_pipes" => false does NOT mean non-blocking pipes.
PHP uses anonymous pipes to start processes on Windows. The Windows CreatePipe() function does not directly support overlapped I/O (aka asynchronous), which is typically how async/non-blocking I/O happens on Windows. SetNamedPipeHandleState() has an option called PIPE_NOWAIT but Microsoft has long discouraged the use of that option. PHP does not use PIPE_NOWAIT anywhere in the source code tree. PHP FastCGI startup code is the only place within the PHP source code that uses overlapped I/O (and also the only place that calls SetNamedPipeHandleState() with PIPE_WAIT). Further, stream_set_blocking() on Windows is only implemented for sockets - not file handles or pipes. That is, calling stream_set_blocking() on pipe handles returned by proc_open() will actually do nothing on Windows. We can derive from these facts that PHP does not have a non-blocking implementation for pipes on Windows and will therefore block/deadlock when using proc_open().
PHP's pipe read implementation on Windows uses PeekNamedPipe() by polling on the pipe until there is some data available to read OR until ~32 seconds (3200000 * 10 microseconds of sleep) have passed before giving up, whichever comes first. The "blocking_pipes" option, when set to true, changes that behavior to wait indefinitely (i.e. always block) until there is data on the pipe. It's better to view the "blocking_pipes" option as a "possibly 32 second busy wait" timeout (false - the default value) vs. no timeout (true). In either case, the boolean value for this option effectively blocks...it just happens to block a lot longer when set to true.
The undocumented string "socket" descriptor type can be passed to proc_open() and PHP will start a temporary TCP/IP server and generate a pre-connected TCP/IP socket pair for the pipe and pass one socket to the target process and return the other as the associated pipe. However, passing a socket handle for stdout/stderr on Windows causes the last chunk(s) of output to occasionally get lost and not be delivered to the receiving end. This is actually a known bug in Windows itself and Microsoft's response at one point was that CreateProcess() only officially supports anonymous pipes and file handles for the standard handles (i.e. not named pipes or socket handles) and that other handle types will produce "undefined behavior." For sockets, it will "sometimes work fine and sometimes truncate the output." The "socket" descriptor type also introduces a race condition that is probably a security vulnerability in proc_open() where another process can successfully connect to the server side BEFORE the original process connects to the socket to create the socket pair. This allows a rogue application to send malformed data to a process, which could trigger anything from privilege escalation to SQL injection depending on what the application does with the information on stdout/stderr.
To get true non-blocking I/O in PHP for Windows for standard process handles (i.e. stdin, stdout, stderr) without obscure bugs cropping up, the only currently working option is to use an intermediary process that uses TCP/IP blocking sockets to route data to blocking standard handles via multithreading (i.e. start three threads to route data between the TCP/IP socket and the standard HANDLE and use a temporary secret to prevent race conditions when establishing the TCP/IP socket handles). For those who lost count: That's one extra process, up to four extra threads, and up to four TCP/IP sockets just to get functionally correct non-blocking I/O for proc_open() on Windows. If you vomited a little bit at that idea/concept, well, people actually do this! Feel free to vomit some more.
This is not really a bug but more of an unexpected gotcha. If you pass in an array for $env and include a modified PATH, that path does not take effect in PHP itself when starting the process. So if you are trying to start an executable in the modified PATH by using just the executable name, PHP and the OS won't find it and therefore will fail to start the process.
The fix is to let PHP know about the modified PATH by calling putenv("PATH=" . $newpath) with the new path string so that the call to proc_open() will correctly locate the executable and successfully run it.
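Something along these lines (the directory and command are placeholders):
<?php
// Make the modified PATH visible both to PHP's executable lookup and to the child.
$newpath = '/opt/custom/bin:' . getenv('PATH');   // placeholder directory
putenv('PATH=' . $newpath);
$env = array('PATH' => $newpath);                 // environment handed to the child
$process = proc_open('mytool', array(1 => array('pipe', 'w')), $pipes, NULL, $env);
?>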
A cross-function fallback solution for executing a command in PHP:
function php_exec( $cmd ){
if( function_exists('exec') ){
$output = array();
$return_var = 0;
exec($cmd, $output, $return_var);
return implode( " ", array_values($output) );
}else if( function_exists('shell_exec') ){
return shell_exec($cmd);
}else if( function_exists('system') ){
$return_var = 0;
return system($cmd, $return_var);
}else if( function_exists('passthru') ){
$return_var = 0;
ob_start();
passthru($cmd, $return_var);
$output = ob_get_contents();
ob_end_clean(); //Use this instead of ob_flush()
return $output;
}else if( function_exists('proc_open') ){
$proc = proc_open($cmd,
array(
0 => array("pipe","r"),
1 => array("pipe","w"),
2 => array("pipe","w")
),
$pipes);
fclose($pipes[0]);                        // no input for the command
$output = stream_get_contents($pipes[1]); // read its stdout
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
return $output;
}else{
return "@PHP_COMMAND_NOT_SUPPORT";
}
}