Shared Memory Functions
Table of Contents
- shmop_close — Close shared memory block
- shmop_delete — Delete shared memory block
- shmop_open — Create or open shared memory block
- shmop_read — Read data from shared memory block
- shmop_size — Get size of shared memory block
- shmop_write — Write data into shared memory block
Comments
What you need to realise is that sysvshm is extremely PHP-oriented in its abilities; interfacing other, non-PHP utilities with it is quite a kludge. For example, have you tried using sysvshm to read an shm segment NOT created by PHP? It's not possible, because sysvshm uses a proprietary format. In essence it can ONLY be used within PHP, unless of course you take the time to figure out this format.
So basically, the purpose of shmop is to provide a simple interface to shared memory that can also be used with OTHER, non-PHP shm creators.
Hope this clears it up.
The idea behind SHMOP is an easy-to-use shared memory interface that adds no extra headers to the shared memory segment and requires no special controls to access the segment from outside of PHP. SHMOP borrows its API from C's shm API, which makes it very easy to use because, like C, it treats shared memory as a file of sorts; even novices can pick it up quickly. Most importantly, SHMOP stores raw data in the shm segments, which means you don't need to worry about matching headers, etc. when you use C, Perl or other programming languages to open, create, read or write shm segments that were created by, or are going to be used by, PHP. In this it differs from sysvshm, whose shm interface uses a specialized header that resides inside the shared memory segment; this adds an unnecessary level of difficulty when you want to access PHP shm from external programs.
Also, from my personal tests on Linux 2.2/2.4 and FreeBSD 3.3, SHMOP is about 20% faster than sysvshm, mostly due to the fact that it does not need to parse the specialized header and stores the data in raw form.
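A minimal sketch of that "file-like" API (the key, size and permissions here are arbitrary example values): open or create a segment, write a raw string, read it back, and close the handle. Any other process attaching with the same key sees exactly the bytes that were written, with no header in front of them.
<?php
// Derive a System V IPC key from this file; any stable key will do.
$key  = ftok(__FILE__, 'a');
$size = 1024;                     // segment size in bytes, example value

// "c" creates the segment if it does not already exist; 0644 are the access bits.
$shmid = shmop_open($key, "c", 0644, $size);
if ($shmid === false) {
    die("Could not create or attach to the shared memory segment\n");
}

// Raw bytes go straight into the segment at offset 0.
shmop_write($shmid, "hello from PHP", 0);

// Read the whole segment back; a C or Perl program attaching with the
// same key would see these same bytes.
$data = shmop_read($shmid, 0, shmop_size($shmid));
echo $data . "\n";

shmop_close($shmid);
?>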
Windows does support shared memory through memory-mapped files. Check the following functions for details:
* CreateFileMapping
* MapViewOfFile
Since there is no mention of the (lack of) need for locking here, I took a look into the shmop.c extension's code. So correct me if I'm wrong, but the shmop.c extension uses memcpy() to copy strings to and from shared memory without any form of locking, and as far as I know, memcpy() is not atomic.
If that's true, as I suspect, then these 'easy to use' functions are not so 'easy to use' any more and have to be wrapped in locks (e.g. semaphores, flocks, whatever).
It's not the job of the shmop extension to provide locking; there are many locking schemes available. If you need some sort of atomic operation, choose a locking scheme that suits you and use it.
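A rough sketch of one such scheme, assuming the sysvsem extension is available: a one-slot System V semaphore serializes access to the segment so no reader ever sees a half-copied string. The keys and size below are arbitrary example values.
<?php
// Hypothetical keys and size, just for illustration.
$sem_key = ftok(__FILE__, 's');
$shm_key = ftok(__FILE__, 'm');
$size    = 1024;

$sem = sem_get($sem_key, 1, 0644, 1);    // one-slot semaphore used as a mutex
$shm = shmop_open($shm_key, "c", 0644, $size);

// Write under the lock.
sem_acquire($sem);
shmop_write($shm, str_pad("some value", $size, "\0"), 0);
sem_release($sem);

// Read under the same lock.
sem_acquire($sem);
$value = rtrim(shmop_read($shm, 0, $size), "\0");
sem_release($sem);

shmop_close($shm);
echo $value . "\n";
?>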
I have written a script to highlight the superiority of shared memory storage. Although it doesn't use the shmop functions, the underlying concept is similar.
'/shm_dir/' is a tmpfs directory (which is backed by shared memory) that I have mounted on the server.
Below is the result on an Intel Pentium VI 2.8 server:
IO test on 1000 files
IO Result of Regular Directory : 0.079015016555786 seconds
IO Result of Shared Memory Directory : 0.047761917114258 seconds
IO test on 10000 files
IO Result of Regular Directory : 3.7090260982513 seconds
IO Result of Shared Memory Directory : 0.46256303787231 seconds
IO test on 40000 files
IO Result of Regular Directory : 117.35703110695 seconds
IO Result of Shared Memory Directory : 2.6221358776093 seconds
The difference is neither very apparent nor convincing at 1000 files, but when we step it up to 10000 and 40000 files, it becomes pretty obvious that shared memory is the better contender.
Script courtesy of http://www.enhost.com
<?php
set_time_limit(0);

// Your regular directory. Make sure it is write-enabled.
$setting['regular_dir'] = '/home/user/regular_directory/';
// Your shared memory directory.
$setting['shm_dir'] = '/shm_dir/';
// Number of files to read and write
$setting['files'] = 40000;

function IO_Test($mode)
{
    global $setting;
    $starttime = microtime(true);

    for ($i = 0; $i < $setting['files']; $i++) {
        $filename = $setting[$mode].'test'.$i.'.txt';
        $content  = "Just a random content";

        // Just some error detection
        if (!$handle = fopen($filename, 'w+')) {
            echo "Can't open the file ".$filename;
            exit;
        }
        if (fwrite($handle, $content) === FALSE) {
            echo "Can't write to file: ".$filename;
            exit;
        }
        fclose($handle);

        // Read test
        file_get_contents($filename);
    }

    $endtime = microtime(true);
    return $endtime - $starttime;
}

echo '<b>IO test on '.$setting['files'].' files</b><br>';
echo 'IO Result of <b>Regular</b> Directory : '.IO_Test('regular_dir').' seconds<br>';
echo 'IO Result of <b>Shared Memory</b> Directory : '.IO_Test('shm_dir').' seconds<br>';

/*
 * Removal of files to avoid underestimation.
 * Failure to remove the files will result in an inaccurate benchmark,
 * as the IO_Test function would then not re-create the existing files.
 */
foreach (glob($setting['regular_dir']."*.txt") as $filename) {
    unlink($filename);
}
foreach (glob($setting['shm_dir']."*.txt") as $filename) {
    unlink($filename);
}
?>
I wrote a PHP memcache back in 2003 as a sort of proof of concept. It is used on a few machines for heavy page-load caching and works very well.
Following are some of the core functions I made.
<?php
###############################################
#### shared mem functions
/*
 * For debugging these:
 *   use `ipcs` to view current memory
 *   use `ipcrm -m {shmid}` to remove a segment
 *   on some systems use `ipcclean` to clean up unused memory if you
 *   don't want to do it by hand
 *
 * The constants TMPDIR, TMPPRE, MEMCOMPRESS and MEMCOMPRESSLVL, and the
 * updatestats() helper, are assumed to be defined elsewhere in the
 * application (not shown here).
 */
###############################################

function get_key($fsize, $file)
{
    // The touched file is only used by ftok() to derive a stable IPC key.
    if (!file_exists(TMPDIR.TMPPRE.$file)) {
        touch(TMPDIR.TMPPRE.$file);
    }
    // "c" creates the segment if necessary.
    $shmkey = @shmop_open(ftok(TMPDIR.TMPPRE.$file, 'R'), "c", 0644, $fsize);
    if (!$shmkey) {
        return false;
    } else {
        return $shmkey;
    }
}

function writemem($fdata, $shmkey)
{
    if (MEMCOMPRESS && function_exists('gzcompress')) {
        $fdata = @gzcompress($fdata, MEMCOMPRESSLVL);
    }
    $fsize = strlen($fdata);
    $shm_bytes_written = shmop_write($shmkey, $fdata, 0);
    updatestats($shm_bytes_written, "add");
    if ($shm_bytes_written != $fsize) {
        return false;
    } else {
        return $shm_bytes_written;
    }
}

function readmem($shmkey, $shm_size)
{
    $my_string = @shmop_read($shmkey, 0, $shm_size);
    if (MEMCOMPRESS && function_exists('gzuncompress')) {
        $my_string = @gzuncompress($my_string);
    }
    if (!$my_string) {
        return false;
    } else {
        return $my_string;
    }
}

function deletemem($shmkey)
{
    $size = @shmop_size($shmkey);
    if ($size > 0) {
        updatestats($size, "del");
    }
    if (!@shmop_delete($shmkey)) {
        @shmop_close($shmkey);
        return false;
    } else {
        @shmop_close($shmkey);
        return true;
    }
}

function closemem($shmkey)
{
    // shmop_close() does not return a value, so just detach.
    shmop_close($shmkey);
    return true;
}

function iskey($size, $key)
{
    if ($ret = get_key($size, $key)) {
        return $ret;
    } else {
        return false;
    }
}
################################################
?>
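A rough usage sketch of the helpers above. The defines and the empty updatestats() stub are made-up placeholders standing in for the parts of the application that are not shown, and the cache name and size are just example values.
<?php
// Placeholder configuration; adjust to your own setup.
define('TMPDIR', '/tmp/');
define('TMPPRE', 'shmcache_');
define('MEMCOMPRESS', false);
define('MEMCOMPRESSLVL', 6);
function updatestats($bytes, $op) { /* stats bookkeeping not shown */ }

$size = 4096;                         // segment size for this cache entry
$key  = get_key($size, 'frontpage');  // 'frontpage' is just an example name

if ($key) {
    writemem('<html>...cached page...</html>', $key);
    $page = readmem($key, $size);     // raw read, padded with NUL bytes
    closemem($key);
    // deletemem($key) would remove the segment entirely.
}
?>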
The shmop implementation as described on this help page is actually merely a ramdisk / tmpfs that exists only within PHP, and even then only on Linux servers. Or am I missing something?
On Windows, the very same functionality can easily be achieved by creating such a disk.
In fact, on my own server, I use a tmpfs disk instead of the, as it appears to me, limited shmop feature.
Why not implement a $_SHARED or $_MUTUAL superglobal in which we can create variables at will and that is shared by all connections?
This would greatly improve the performance of many PHP applications and could save a lot of burden on server memory, especially if those variables could be classes containing functions.
It could be left up to the programmer to guard atomicity.
Such a superglobal would be feasible on Windows servers as well.
Despite the fact that reads from and writes to shared memory are not atomic, reading and writing just ONE byte is always atomic. This can be very useful if your application frequently reads and rarely writes "small" chunks of data (~10-15 bytes). You can avoid using any kind of lock by signing your data with an 8-bit checksum (such as CRC-8): a reader whose computed checksum does not match the stored checksum byte knows it caught a torn write and can simply read again. This is an effective and reliable way to ensure that your data is not corrupted. The redundancy is naturally 8 bits.
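A minimal sketch of the idea, using a simple additive 8-bit checksum instead of a real CRC-8 (a real CRC-8 would catch more error patterns); the key and record size are made-up example values.
<?php
// Simple 8-bit checksum over a string, returned as a single byte.
function checksum8($data)
{
    $sum = 0;
    for ($i = 0, $n = strlen($data); $i < $n; $i++) {
        $sum = ($sum + ord($data[$i])) & 0xFF;
    }
    return chr($sum);
}

$len = 15;                                          // fixed record size, example value
$shm = shmop_open(ftok(__FILE__, 'x'), "c", 0644, $len + 1);

// Writer: the payload followed by its checksum byte.
$data = str_pad("counter=42", $len);
shmop_write($shm, $data . checksum8($data), 0);

// Reader: retry until the checksum byte matches the payload just read,
// which means no torn write was caught in between.
do {
    $raw     = shmop_read($shm, 0, $len + 1);
    $payload = substr($raw, 0, $len);
} while (checksum8($payload) !== $raw[$len]);

echo trim($payload) . "\n";
shmop_close($shm);
?>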