Author: roscoe

PeerLevel 2009 – 2010 Uptime Statistics

October marks the end of our statistics year, and I am proud of our 2009 – 2010 numbers. The uptime statistics below are based on one-minute check intervals (the statistics would look even better at the usual 5 or even 15 minute check intervals that other web hosts use).

Shared Servers
Average uptime: 99.99%
Total average downtime: 32m 12s

We use an internal monitoring system for our VPS nodes and unfortunately do not have public reports for those systems; however, I will still include the statistics below.

Virtual Private Servers (Hardware Nodes)
Average uptime: 99.99%
Total average downtime: 9m 37s

All outages are recorded – both planned and unplanned. While the figure does not reflect every single system’s uptime, it does indicate the overall stability average. It is what you can expect when you see uptimes such as 00:11:56 up 155 days, 23:57 on many systems, coupled with people who know what they are doing.
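Out of curiosity (this is my own arithmetic, not part of the original report), the quoted downtime figures can be converted back into uptime percentages over a full year:

```java
// Convert the reported total downtime into an uptime percentage,
// assuming a 365-day measurement year.
public class UptimeCheck {
    public static void main(String[] args) {
        double yearMinutes = 365 * 24 * 60;      // 525,600 minutes in a year
        double sharedDown  = 32 + 12 / 60.0;     // 32m 12s
        double vpsDown     = 9 + 37 / 60.0;      // 9m 37s
        System.out.printf("Shared: %.3f%%%n", 100 * (1 - sharedDown / yearMinutes)); // 99.994
        System.out.printf("VPS:    %.3f%%%n", 100 * (1 - vpsDown / yearMinutes));    // 99.998
    }
}
```

Both figures round to the 99.99% quoted above.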

If you need our public report links, then please let me know. I would like to thank each and every client for their support and I look forward to next year’s statistics.

Linux CLI Entertainment

If you are reading this and you are a Linux Command Line Interface (CLI) user, then you may be interested in occasionally beautifying your CLI output with various (often comical) pieces of ASCII artwork using the programs cowthink or cowsay, as shown below:

$ ping -c 5 | cowthink -W 60
( PING ( 56(84) bytes of data. 64 )
( bytes from                 )
( ( icmp_seq=1 ttl=59 time=30.6 ms 64 bytes   )
( from (     )
( icmp_seq=2 ttl=59 time=30.1 ms 64 bytes from                )
( (          )
( icmp_seq=3 ttl=59 time=31.0 ms 64 bytes from                )
( (          )
( icmp_seq=4 ttl=59 time=30.3 ms 64 bytes from                )
( (          )
( icmp_seq=5 ttl=59 time=30.9 ms                              )
(                                                             )
( --- ping statistics --- 5 packets transmitted, )
( 5 received, 0% packet loss, time 4005ms rtt                 )
( min/avg/max/mdev = 30.167/30.631/31.071/0.401 ms            )
        o   ^__^
         o  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

$ ping -c 5 | cowsay -W 60
/ PING ( 56(84) bytes of data. 64 \
| bytes from                 |
| ( icmp_seq=1 ttl=59 time=30.1 ms 64 bytes   |
| from (     |
| icmp_seq=2 ttl=59 time=30.4 ms 64 bytes from                |
| (          |
| icmp_seq=3 ttl=59 time=31.3 ms 64 bytes from                |
| (          |
| icmp_seq=4 ttl=59 time=31.1 ms 64 bytes from                |
| (          |
| icmp_seq=5 ttl=59 time=31.1 ms                              |
|                                                             |
| --- ping statistics --- 5 packets transmitted, |
| 5 received, 0% packet loss, time 4004ms rtt                 |
\ min/avg/max/mdev = 30.170/30.839/31.348/0.458 ms            /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Or one of my favourites:

$ uptime | cowthink -f tux
(  06:10:29 up 1 day, 1:43, 3 users, load )
( average: 0.60, 0.41, 0.23               )
       |o_o |
       |:_/ |
      //   \ \
     (|     | )
    /'\_   _/`\

Want to know what sort of cows or ASCII artwork is available to your install? You need only use the command:

$ ls /usr/share/cowsay/cows

Resellers and Their Stigma

The views and acceptance of reseller web hosting providers differ greatly between the local (South African) and international markets. The general perception amongst South African consumers is that reseller accounts are somehow inferior or not to be advised – something that stands in deep contrast with the view of international consumers. International consumers understand that a reseller who stays on top of their customer support and has partnered with a professional web hosting company can offer good service, often with value-added extras.

To describe exactly what a reseller is: a reseller is basically an individual or company that sells a service they do not provide themselves – a middleman, if you will. In web hosting, resellers usually have their own plan breakdowns and billing system, and are usually required to provide basic support to their clients.

I am often amazed at how vehemently opposed local customers are to resellers. Just recently I read a forum post claiming that one should not purchase web hosting from a company that does not have its own data centre. This borders on an idiotic statement, simply because it costs tens or even hundreds of millions of Rands just to build and develop a data centre, never mind infrastructure, staff, marketing, licencing or networking equipment and servers.

There are extremely few web hosting providers that own their own data centres, and locally I can only think of a few, most of which do not provide shared web hosting. Most of the largest and fastest-growing shared-hosting-focused web hosting companies in the world do not own their own data centres.

We rent and own servers housed in numerous data centres but do not own any such facilities ourselves. These facilities are all best of class, and I doubt they could be run any more efficiently than they already are. I cannot fathom why that would put us at a disadvantage to a data centre operator that actually provides shared web hosting accounts (which, again, are very rare to begin with).

I would be interested to know how these individuals would compare two such data centres that both provide web hosting – would their floor space make a difference? What if one owns multiple data centres versus a provider with a single facility?

I wonder what these same people would answer if posed questions such as “Do you only eat at restaurants that own their food supply chains (the farms, etc), as well as their buildings?” or “Do you only fly with airlines that own their own airports?”. When it is put into such everyday perspectives, the absurdity of these claims is easier to understand.

Please be careful when reading advice from people on forums or social media – it should often be taken with a pinch of salt.

Dynamic and Static Binding

I enjoy understanding not only a programming language’s syntax, but also the way the language works internally.

Let’s say I have some Java classes and methods such as those below:

class Animal {
    public void walk() {
        // Do something.
    }
}

class Dog extends Animal {
    public void walk() {
        // Do something slightly different.
    }
}

void action(Animal a) {
    a.walk();
}

Dog d = new Dog();
Ok, now let’s do something with d:

action(d);
Did you see the amazing behind-the-scenes process going on there? It isn’t too obvious, so I will spell it out for you below:

The Dog class has its own walk method which (more than likely) is different from the Animal class’s walk method. The action method accepts the Animal type, so it does not know whether it was passed an upcast Dog specifically or just an Animal. However, when you run the application, the correct walk is invoked! Why does this happen?

The reason this happens is Dynamic Binding. Dynamic Binding occurs when the compiler cannot resolve a call at compile time, so the binding happens at runtime. Yes, you guessed it – binding has to do with inheritance. Method calls are actually bound based on the actual object type, not the declared type.

Static Binding refers to bindings that can be determined at compile time, for example member variables.
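A minimal runnable sketch of both behaviours (the return strings and the name field are my own additions, not part of the snippet above): the method call is bound dynamically on the runtime type, while the field access is bound statically on the declared type.

```java
// Dynamic vs static binding demo. The strings and the 'name' field
// are illustrative additions.
class Animal {
    String name = "Animal";                 // fields: statically bound
    public String walk() { return "Animal walks"; }
}

class Dog extends Animal {
    String name = "Dog";                    // hides, does not override, Animal.name
    @Override
    public String walk() { return "Dog walks"; }
}

public class BindingDemo {
    public static void main(String[] args) {
        Animal a = new Dog();               // declared Animal, actual Dog
        System.out.println(a.walk());       // prints "Dog walks" (runtime type wins)
        System.out.println(a.name);         // prints "Animal" (declared type wins)
    }
}
```

This is why the correct walk runs in the action example above, yet two same-named fields would not behave the same way.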

The Flood of Web Developers

There has been a steady increase in web developers over the years, and it saddens me to see the quality of some of the work, and the abuse of technologies employed as a drop-in replacement for a clear lack of ability. Take, for example, CMS-based platforms – these are meant to enable websites to be updated easily and to manage a decent volume of articles in a logical manner.

What I see more and more often are simple, static websites that are built using these technologies on the basis that:

  • The owner can update the website easily.
  • Articles and data are stored logically.

Fair enough, but these are the kinds of websites that will VERY rarely be updated and are extremely simple. I am talking about 4 or 5 page websites with a simple “About Us” and similar descriptive pages, built on these CMS platforms.

I suspect that one reason these CMS platforms are chosen is an apparent absolute lack of knowledge of even simple HTML and CSS design. It is far easier to just download a template, apply it to a CMS and edit the content in an easy-to-use web-based editor than to create an efficient, static website.

Even though the website is easy for the owner to update, there is almost zero chance of the owner ever updating the underlying website script, which raises the question of whether the web developers who employ this simple means of website creation understand the risks involved. What you end up with is a web developer who spends a few minutes installing a CMS script, then applies an (often free) template. Finish it off with a few minor adjustments, and that is what they would consider a job well done.

What you really get is a 4 page website that uses 15 megabytes of storage, with a database holding 15 tables and 5 records.

Just How Well Does a VPS Perform?

Many web hosting shoppers are wary of VPS (Virtual Private Server) solutions that are touted as offering the performance of a dedicated server as a software instance. High-end Virtual Private Servers can usually outperform an entry-level dedicated server, but everything is relative.

I decided to do some benchmarks today to identify the performance indicators and compare them against one of my traditional desktop systems. Three benchmark runs were performed, and the average of the three is reported.

The results below were produced by an AMD Athlon X2 5200+ with 2 Gigabytes of memory and a run-of-the-mill 7200 RPM SATA hard disk, running 32-bit Ubuntu 10.04:

BYTE UNIX Benchmarks (Version 4.1-wht.2)
System — Linux 2.6.32-14-generic #20-Ubuntu SMP Sat Feb 20 05:38:50 UTC 2010 i686 GNU/Linux
/dev/sda7             77813696  63782568  10078340  87% /

Start Benchmark Run: Tue Jun 15 13:39:58 CAT 2010
13:39:58 up 15:43,  2 users,  load average: 0.02, 0.06, 0.01

End Benchmark Run: Tue Jun 15 13:50:31 CAT 2010
13:50:31 up 15:53,  2 users,  load average: 18.72, 7.08, 2.99

TEST                                        BASELINE     RESULT      INDEX

Dhrystone 2 using register variables        376783.7 11393094.3      302.4
Double-Precision Whetstone                      83.1     1269.0      152.7
Execl Throughput                               188.3     7209.1      382.9
File Copy 1024 bufsize 2000 maxblocks         2672.0   131291.0      491.4
File Copy 256 bufsize 500 maxblocks           1077.0    46040.0      427.5
File Read 4096 bufsize 8000 maxblocks        15382.0   772334.0      502.1
Pipe-based Context Switching                 15448.6   411501.4      266.4
Pipe Throughput                             111814.6  1580693.0      141.4
Process Creation                               569.3    20060.0      352.4
Shell Scripts (8 concurrent)                    44.8      900.8      201.1
System Call Overhead                        114433.5  2763156.4      241.5
FINAL SCORE                                                          289.6
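Incidentally, UnixBench’s FINAL SCORE is simply the geometric mean of the per-test indices, which you can verify against the desktop table above:

```java
// Recompute the FINAL SCORE as the geometric mean of the eleven
// per-test indices from the desktop benchmark table.
public class FinalScore {
    public static void main(String[] args) {
        double[] idx = {302.4, 152.7, 382.9, 491.4, 427.5, 502.1,
                        266.4, 141.4, 352.4, 201.1, 241.5};
        double logSum = 0;
        for (double i : idx) logSum += Math.log(i);   // sum of logs, then exp of the mean
        System.out.printf("%.1f%n", Math.exp(logSum / idx.length)); // prints 289.6
    }
}
```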

The results below are from our Silver VPS line, featuring Virtuozzo virtualisation. The VM was installed with 64-bit CentOS 5 with cPanel and sports 512 Megabytes of memory.

BYTE UNIX Benchmarks (Version 4.1-wht.2)
System — Linux 2.6.18-028stab067.4 #1 SMP Thu Jan 14 17:06:11 MSK 2010 i686 i686 i386 GNU/Linux
/dev/vzfs             31457280   9656548  21800732  31% /

Start Benchmark Run: Tue Jun 15 09:58:08 EDT 2010
09:58:08 up 23 days, 10:43,  2 users,  load average: 0.02, 0.03, 0.00

End Benchmark Run: Tue Jun 15 10:08:13 EDT 2010
10:08:13 up 23 days, 10:53,  2 users,  load average: 19.44, 7.53, 3.12

TEST                                        BASELINE     RESULT      INDEX

Dhrystone 2 using register variables        376783.7 21662200.7      574.9
Double-Precision Whetstone                      83.1     1217.9      146.6
Execl Throughput                               188.3     7482.7      397.4
File Copy 1024 bufsize 2000 maxblocks         2672.0   245885.0      920.2
File Copy 256 bufsize 500 maxblocks           1077.0    68374.0      634.9
File Read 4096 bufsize 8000 maxblocks        15382.0  2397244.0     1558.5
Pipe-based Context Switching                 15448.6   256611.7      166.1
Pipe Throughput                             111814.6  3229997.3      288.9
Process Creation                               569.3    25387.6      445.9
Shell Scripts (8 concurrent)                    44.8     2504.2      559.0
System Call Overhead                        114433.5  3037263.6      265.4
FINAL SCORE                                                          431.6

You may be wondering why the VPS performed so well. The result is explained by the Equal Share CPU setup and the 15,000 RPM SAS RAID-10 disk configuration. Each VPS node consists of at least 16 CPU cores at 2.2 GHz, and very high speeds can be attained when these are configured for Equal Share. Couple that with the I/O performance of the disk configuration and you can see why there is such a huge difference in score, in spite of the smaller amount of available memory.

Want to try benchmarking your machine? Feel free to download the modified BYTE Magazine benchmark here.

To install:

# tar -xzvf unixbench-*-wht.tar.gz
# cd unixbench-*-wht-2 ; make
# ./Run

Let me know if there are any specific benchmarks you would like to see. The next batch of benchmarks will comprise Xen-based virtual machines, as well as more multi-threading intensive tests.