
Small comparison of PHP Frameworks

October 11th, 2011

I’m working on a PHP web site and I want to add a user authentication component. In the past I’ve written hundreds of these things and couldn’t be bothered to write the same code all over again. You know the story: create a user table in the database, add a load of database table management web pages to create, modify and delete users, then add the concept of a logged-in user to each page. It’s a lot of work. Those of us in the “know” call this boilerplate code, and it is basically a laborious distraction from writing our application.

I thought I’d check out some of the PHP framework tools available to see what’s out there that could handle this for me. This is by no means a comprehensive list. I wanted to look briefly at a couple and choose; I didn’t want to spend all day trying to pick holes in each one. I figured that for such a small requirement it wouldn’t take long to implement the user authentication. If one of my shortlist didn’t shape up then I could just swap it out.

My only goal is to find something that will handle users for me. The framework that gets me closest to this wins.

There is a handy web site called PHP Frameworks that lists the majority of the frameworks, with a useful table describing which features are supported by which framework. The features it tracks are: PHP4, PHP5, MVC, Multiple DB’s, ORM, DB Objects, Templates, Caching, Validation, Ajax, Authentication Module, Modules and EDP.

Akelos
The Akelos framework describes itself as a Ruby on Rails port for PHP. I’ve used Grails before, and although these frameworks can save you a lot of time, you lose the time benefits in trying to learn all the commands. One just seems to spend all of one’s life trying to figure out why something didn’t work and hunting for bizarre error messages in forums.
There is a “Creating a blog in 20 minutes using the Akelos PHP Framework” screencast. A Spanish guy with an unpronounceable name whizzes through the tutorial. It’s quite comprehensive, but you really need to run through it with him, feeling your way along with the application and trying different things. There are many commands to create controllers, models and views, but if Grails is anything to go by, there are a host of other things you can create which will slowly suck the life out of you! There just seems to be loads to do and read before you can do anything.
Verdict: Once bitten, twice shy. Anything which uses convention over configuration requires you to learn all the conventions, which are often neither obvious nor conventional. I don’t really want to spend the rest of my life doing this. For what I need, this is too much effort.

Dingo
Dingo, on the other hand, has a really simple website. This project works less like an all-encompassing framework and more like a set of classes that can be slotted in: you need only learn and use the helper classes that suit you. Dingo is still an MVC framework, but it is so unobtrusive you should be able to fit it into an existing development. If you need a nice simple example of how an MVC system works then this is a really good starting point. The documentation is clear and the examples are straightforward. The project probably doesn’t have a million options to fulfil everyone’s requirements, but if your needs are the more general cases then you should be more than happy. There were several modules to help handle XML, pagination, CAPTCHA, sessions and user authentication. Reading through the project’s Twitter feed suggests that the project is struggling to find developers. It would be a real shame if development stopped on this project.
Verdict: I definitely liked this. It is to the all-encompassing frameworks what the Spring Framework is to JBoss. The project is still in its infancy (version 0.7.1 at the time of writing) and there didn’t seem to be much in the way of example projects, but it kind of doesn’t matter because what is there should be enough to get anyone up and running.

CakePHP
Verdict: This is another Ruby on Rails port. Comprehensive documentation, although the webcasts were a bit slow and tended to be people talking at seminars rather than tutorials.

Yii
The best introduction you could have is to watch the four tutorial presentation screencasts. Each one is 5 to 7 minutes long and takes you from downloading the application to writing a Hello World program where the Hello World text is stored in the page, then in a template, and finally in a database. This framework is also MVC, but unlike Akelos many of the steps have helper web pages to guide you through rather than expecting you to remember all the options. The user interface helps with the setting up and configuration of models, views and controllers. The Class Reference documentation looks generated, and as a result it’s comprehensive and easy to navigate, with a similar feel to JavaDocs.
Verdict: After watching all the videos I am raring to go. From the outside it has probably the same number of features as Akelos or CakePHP, but my fears have been allayed by a very good support web site and a set of accompanying books.

Conclusion
I did like Dingo, but I’ve decided to go with Yii because it looks like I can use it for just a small part of my site and yet there is plenty of scope for it to grow if need be. My next blog post will probably be along the lines of setting up a user authentication system in Yii. So watch this space.

Perceived value

October 4th, 2011

I’m very black and white when it comes to buying things and personal shopping. I almost take it to the extreme: a £20 t-shirt should be twice as good as a £10 t-shirt with regard to build quality. If it is not twice as good, I’m paying for the brand. The trouble with the brand is that it is only perceived value. Perceived value is not the same as actual value and in reality bears no relationship to build quality.

I had a girlfriend a while back who spent an awful lot of money on shoes, bags and clothes. It was a constant bone of contention: she maintained that a Gucci handbag cost £500 because it was made from quality components, whereas a similar-looking bag made by an independent was cheaper because it was made with sub-standard materials. “You’re paying for the quality,” she kept saying. Amazing, such is the power of marketing!

While there may be other things that affect the price, such as after-sales support, the majority of the cost comes from the perception that if something is expensive then only a select few will be able to afford it, and with that comes exclusivity. There is no way that a pair of Emporio Armani jeans is five times as good as a pair of Levi’s jeans, even though the price tag is five times higher. So what am I paying for? It costs a lot of money to advertise in exclusive magazines and even more to advertise on television. I expect it costs these brands quite a bit to push their wares on celebrities and in sponsorship too. But none of these expenses have anything to do with the actual product. I don’t expect these companies are pouring money into R&D to design the latest bags; market research isn’t that expensive, and because it’s fashion, whatever they say the latest fashion is, that’s what it is. So no money spent there either.

Maybe I’m taking a simplistic view on this, but is that just it? Is everyone being duped into paying twice the price for nothing?

Is it really a bargain? Do you really need it?

September 28th, 2011

Edgar Watson Howe:

One of the most difficult things in the world is to convince a woman that even a bargain costs money.

I ended up with a house full of things I didn’t need or want that my girlfriend had bought because they were cheap. For some reason I couldn’t convince her that just because they had knocked 50% off the price didn’t mean that it was a bargain. It only became a bargain if we needed it and more often than not it was just used as ammo for her argument about buying a bigger house!

Getting up and running with Sentinel RMS, C#, .NET, Windows 7 and avoiding BadImageFormatException

September 23rd, 2011

SafeNet has an application called Sentinel RMS. Sentinel RMS is a suite of applications that helps a company license out its software.

One of my current projects requires me to use Sentinel RMS within a Microsoft .NET framework using C#.

I had a couple of problems setting up the example solutions SafeNet gave out, so I thought I’d document them here. There probably aren’t that many people who will use Sentinel RMS, but I did run into a problem that might well be more generally useful:

API call fail with message:System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)
at SentinelRMSCore.Interface.LSAPI.VLSinitialize()
at SentinelRMSCore.RMSDKCore.VLSinitialize():VLSinitialize
API call fail with error-code:501

While the manual for Sentinel RMS was pretty good, it was all about the APIs and how to call them. The documentation was missing a “How to get started” section describing how to set up Visual Studio. There are a few brief lines saying which versions of Visual Studio are supported, but that’s about it.

  1. Download SDK RMS 8.5.0 Windows 32 and 64-bit GA release (you can get this from your support representative)
  2. Download the Sentinel RMS 8.4.1 .Net Interface (you can get this from your support representative)
  3. Unpack and install the SDK.
  4. Add the following DLL folder to your PATH:

    C:\Program Files (x86)\SafeNet Sentinel\Sentinel RMS Development Kit\8.5\English\MsvcDev\Lib\MSVS2008\Win32\DLL\

    This will allow the C# assemblies to find the required DLLs.
  5. Unpack the .Net Interface.
  6. Navigate to RMS 8.4.1 .NET Interface\Examples
  7. Create a folder called Libraries
  8. Copy the contents of RMS 8.4.1 .NET Interface\Library into the RMS 8.4.1 .NET Interface\Examples\Libraries folder
  9. Navigate to RMS 8.4.1 .NET Interface\Examples\VisualC#\AddRemoveLicense
  10. Double-click the solution file (.sln) to launch Visual Studio. The documentation says that only Microsoft Visual Studio 2003/2005/2008 is supported, so make sure it’s one of those.
  11. When the solution loads in, right-click on the project name and select Properties.
  12. Under the Application tab change the Target Framework to 3.5.
  13. Under the Build tab change the Platform Target to x86.
  14. Then save, clean solution, rebuild solution and run.

I found that if I didn’t add the DLL folder to the PATH then I got the following error message, and in order to fix it I had to manually copy lsapiw32.dll into the Release/Debug folder. There didn’t seem to be anywhere in Visual Studio that lets you add extra DLL search folders, because it is rubbish.

API call fail with message:System.DllNotFoundException: Unable to load DLL ‘lsapiw32.dll’: The specified module could not be found. (Exception from HRESULT: 0x8007007E)
at SentinelRMSCore.Interface.LSAPI.VLSinitialize()
at SentinelRMSCore.RMSDKCore.VLSinitialize():VLSinitialize
API call fail with error-code:501

The other problem, which is probably more generally useful, was that of the System.BadImageFormatException exception. This occurs when the Common Language Runtime (CLR) tries to load an assembly that contains unmanaged code built targeting a different platform (thanks Dave).

In my case with Sentinel, the lsapiw32.dll was compiled for the x86 platform as the 32-bit version of the DLL. Visual Studio defaults to building for a target platform of Any CPU, and this discrepancy is what causes the error. Equally, if I had chosen to fill the Libraries folder with DLLs from the Library(x64) folder (and the corresponding C:\Program Files (x86)\SafeNet Sentinel\Sentinel RMS Development Kit\8.5\English\MsvcDev\Lib\MSVS2008\Win64\DLL) instead, then I would have had exactly the same problem.
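
If you are not sure which flavour of process you are actually running under, a one-line check will tell you. This is just a diagnostic sketch (IntPtr.Size works on .NET 2.0 and up; the newer Environment.Is64BitProcess property needs .NET 4):

Code

using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr is 4 bytes in a 32-bit process and 8 in a 64-bit one.
        Console.WriteLine("Running as a {0}-bit process", IntPtr.Size * 8);
        // If this prints 64 while lsapiw32.dll is the 32-bit build, the CLR
        // will throw BadImageFormatException the moment a P/Invoke call
        // tries to load the DLL.
    }
}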

Connection balancing across NLB using IIS and MaxKeepAliveRequests

September 21st, 2011

I have been doing a lot of work lately with Network Load Balancer (NLB) which is Microsoft’s clustering solution and Microsoft Internet Information Services (IIS).

We have written a video transcoding application which sits under a RESTful front end provided by IIS. The transcoding application is CPU bound, that is, the CPU is the first place to bottleneck and prevent the computer from doing more work. The heavy CPU load is caused by the video transcoding, which involves reading a unit of video from a video server, converting it to another format and squirting it out to a client. Transcoding video is a pipeline process, which means there are huge performance advantages in processing a series of consecutive video units in a read-ahead fashion.

A normal web server could handle 2 or 3 orders of magnitude more requests than ours. As a result we found that it was more difficult to load balance across an NLB cluster because the number of new incoming connections was relatively small.

The application suite has been designed to be stateless in order to allow it to fit into a cluster architecture. We want to be able to scale outward easily: to support more clients, we just add more boxes.

Our experiments have shown that 1 PC can support about 10 simultaneous clients before the system’s performance degrades to unusable levels. For each new PC we add to the cluster, we can get another 8-10 clients.

We would like to keep each client talking to the same cluster node for a short period so that we can get the benefit of pipe-lining requests, while at the same time we need to make sure that clients can move between cluster nodes in order to keep the load evenly balanced across the cluster.

Building a suitable solution requires configuration in several places across NLB and IIS, plus some custom code.

Under IIS, HTTP KeepAlive allows a client to connect once, then make as many requests down the connection pipe as it likes before the client closes the connection. The server will hang on to each client until they go away. If KeepAlive is switched off, the connection is closed at the end of each request, which may add significant overhead when dealing with clients that are geographically distant. HTTP KeepAlive works at layer 5 of the OSI model.
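
For reference, keep-alive can be toggled per site in IIS7’s web.config; a minimal sketch, showing the setting at its default of true:

XML

<configuration>
  <system.webServer>
    <httpProtocol allowKeepAlive="true" />
  </system.webServer>
</configuration>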

NLB has a similar option called Affinity. The Affinity can be either sticky or non-sticky (there are other states but for the purposes of this article they can all be condensed into these two). Stickiness ensures that the same client is always directed to the same cluster node. NLB works on layer 4 of the OSI model.

The simplest solution is to switch NLB Affinity to non-sticky and set HTTP KeepAlive to false. Each incoming request that arrives at the cluster will be directed to one of the machines, make its request, get the data and then tear everything down and start again for the next request. With this setup we will not be able to take advantage of the pipe-lining efficiency that could be had, and as a result the platform will support fewer clients overall.

Each one of these technologies has advantages and disadvantages. The advantage of using stickiness with NLB is that you can ensure that all requests for a client, for the lifetime of the client or of that cluster node, will be directed to the same place. That is good for pipe-lining but bad for load balancing. The advantages and disadvantages of HTTP KeepAlive are similar, except here you are at the mercy of what the client decides to do.

Our experiments have shown that if one of the nodes in the cluster goes down, NLB will notice and rebalance, diverting incoming traffic to another node in the cluster. The HTTP KeepAlive clients will simply reconnect to the next allocated node in the cluster and stay there for the rest of their lives. When a downed node comes back up it rejoins the balancing, but NLB will not sever existing connections, so all the existing clients stay where they are and only new incoming connections are allocated to the newly returned node. What we find is that after a cluster node failure the remaining nodes take up the slack and end up working extra hard, but when the failed node re-enters the cluster it sits there doing nothing.

If you were dealing with thousands of small requests it would be a different story; it probably wouldn’t matter so much because new clients are coming and going all the time.

What we need is a combination of KeepAlive and not KeepAlive on a non-sticky platform. Apache has a configuration option called MaxKeepAliveRequests, which severs the connection to the client after that many requests (the default is 100). With this option we can have 100 consecutive requests over the same connection, enjoying the benefits of pipe-lining the requests, yet we give the platform a chance to balance itself on a regular basis.
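
For comparison, this is all it takes in Apache’s httpd.conf (MaxKeepAliveRequests defaults to 100; a value of 0 means unlimited):

Apache

KeepAlive On
MaxKeepAliveRequests 100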

IIS has no concept of limiting the number of requests a connection can service, which probably goes some way to explaining why IIS only has 15.73% of the web server market. I posted a question on ServerFault but didn’t get a satisfactory response. The one reply I did get was from someone saying that if my application was truly stateless I should switch off KeepAlive altogether and take the penalty of the re-connection. While the application is stateless, there are advantages to be had from batching requests together. An answer of “it can’t be done” or “it is not supported” is, in my opinion, not an answer. What they actually mean is that it is not supported yet. In I.T. almost everything *is* possible as long as you know what to do.

IIS7 has a new pipeline module architecture that allows you to inject code into the processing of a request at any one of about 12 different stages. The request passes through each registered module at each stage it has hooked, giving the module a chance to modify the request’s response.

When the module is loaded, it reads the MaxKeepAliveRequests number from the web.config. For each request that comes in, the module remembers the remote host, remote port and how many requests have been serviced by that combination. When the request is in its final stage, we check to see if the number of serviced requests is bigger than MaxKeepAliveRequests. If it is, then we inject a Connection: close header into the response. This will make its way through IIS, safely closing the connection on its way out.

Surprisingly there was a great deal of confusion in the MSDN documentation, blogs and forums surrounding how to force a close after a request. I found that HttpResponse.Close() can chop the end off the reply, and HttpApplication.CompleteRequest() didn’t work because the request was already inside the EndRequest section of the pipeline. So I went back to the specification, and RFC 2616, Section 8 (Connections) talks about injecting Connection: close into the response header so that after the response is sent out the server closes the connection. The closure forces the client to reconnect. I tried this using a telnet client (and not a web browser) and can confirm that it is the server that closes the connection, not the client deciding to.
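
To illustrate what the client sees, here is a hypothetical exchange (host and path invented) as typed into telnet; once the final response has been sent, the server drops the socket:

HTTP

GET /video/clip1 HTTP/1.1
Host: cluster.example.com

HTTP/1.1 200 OK
Content-Length: 512
Connection: close

(body follows; the server then closes the TCP connection and the client must reconnect)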

I had thought about using the Session to store the request count but I didn’t think it would help. If a proxy server is talking to your cluster then it may be interleaving requests from several sources with different session identifiers. We are interested in the transport layer, and not the session layer. We must use values from the transport layer to differentiate the clients in order to spread the load.

Simply compile up this C# and add it to your IIS integrated process pipeline.

Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Diagnostics;
using System.Collections.Specialized;
using System.IO;
 
namespace WebApplication1
{
 
    public class MaxKeepAliveRequestsModule : IHttpModule
    {
        int maxRequests = 0;
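        // Note: this Dictionary is not thread-safe, and IIS creates one
        // module instance per pooled HttpApplication, so these counts are
        // per-instance rather than global. Good enough for this purpose.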
        Dictionary<string, KeepAliveClient> record = new Dictionary<string, KeepAliveClient>();
 
        public MaxKeepAliveRequestsModule()
        {
            Debug.WriteLine("Debug : MaxKeepAliveRequestsModule.con");
        }
 
        public int MaxRequests
        {
            get { return maxRequests; }
            set { maxRequests = value; }
        }
 
        public void Init(HttpApplication context)
        {
            Debug.WriteLine("Debug : creating MaxKeepAliveRequestsModule");
 
            string mrStr = System.Web.Configuration.WebConfigurationManager.AppSettings["MaxKeepAliveRequests"];
            maxRequests = validateMaxKeepAliveRequestsValue(mrStr);
            
            context.EndRequest += new EventHandler(OnEndRequest);
        }
 
        private int validateMaxKeepAliveRequestsValue(string val)
        {
            if (val == null || val.Length == 0)
                throw new ArgumentException("appSettings.MaxKeepAliveRequests is empty");
            int mr = Convert.ToInt32(val);
            if (mr < 1)
                throw new ArgumentException("appSettings.MaxKeepAliveRequests must be greater than zero: " + mr);
            return mr;
        }
 
        public void Dispose()
        {
            Debug.WriteLine("Debug : MaxKeepAliveRequestsModule.Dispose");
        }
 
        public void OnEndRequest(Object source, EventArgs e)
        {
 
            Debug.WriteLine("Debug : MaxKeepAliveRequestsModule.OnEndRequest");
            HttpApplication app = (HttpApplication) source;
            HttpRequest request = app.Context.Request;
            HttpResponse response = app.Context.Response;
 
            // Tried to use the socket as the key, but we don't seem to have access to it from here
            // Stream k = response.OutputStream;
 
            NameValueCollection serverVariables = request.ServerVariables;
            string k = serverVariables["REMOTE_HOST"] + ":" + serverVariables["REMOTE_PORT"];
 
            if (record.ContainsKey(k))
            {
                KeepAliveClient c = record[k];
                Debug.WriteLine("Debug : MaxKeepAliveRequestsModule.OnEndRequest: hit");
                if (c.Hits > maxRequests)
                {
                    Debug.WriteLine("Debug : MaxKeepAliveRequestsModule.OnEndRequest:max requests reached for " + k + "(" + c.Hits + "), force close connection to client");
 
                    // works, but may chop the end of the response
                    // response.Close();
 
                    // doesn't appear to work
                    // app.CompleteRequest();
 
                    // http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
                    response.Headers["Connection"] = "close";
                    record.Remove(k);
                    return;
                }
                c.touch();
 
            }
            else
            {
                Debug.WriteLine("Debug : MaxKeepAliveRequestsModule.OnEndRequest: miss");
                cleanOldKeepAliveRecords();
                record.Add(k, new KeepAliveClient(k));
            }
 
        }
 
        private void cleanOldKeepAliveRecords()
        {
            foreach (KeepAliveClient cc in record.Values.ToList())
            {
                if (cc.isExpired())
                {
                    Debug.WriteLine("Debug : MaxKeepAliveRequestsModule.cleanOldKeepAliveRecords: key=" + cc.Key);
                    record.Remove(cc.Key);
                }
            }
        }
    }
 
    class KeepAliveClient
    {
        private static TimeSpan TIMEOUT = new TimeSpan(1, 0, 0); // hour
 
        private DateTime now;
        private int hits;
        private string key;
 
        public KeepAliveClient(string key)
        {
            this.key = key;
            now = DateTime.Now;
            hits = 1;
        }
 
        public int Hits
        {
            get { return hits; }
        }
 
        public string Key
        {
            get { return key; }
        }
 
        public void touch()
        {
            hits++;
            now = DateTime.Now;
        }
 
        public bool isExpired()
        {
            return now + TIMEOUT < DateTime.Now;
        }
    }
}

You’ll need to add the configuration option to the web.config:

XML

<configuration>
  <appSettings>
    <add key="MaxKeepAliveRequests" value="100"/>
  </appSettings>
</configuration>
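
You’ll also need to register the module with the IIS integrated pipeline. Assuming the class is compiled into the site’s bin folder under the WebApplication1 namespace used above (you may need to append the assembly name to the type attribute), something along these lines:

XML

<configuration>
  <system.webServer>
    <modules>
      <add name="MaxKeepAliveRequestsModule" type="WebApplication1.MaxKeepAliveRequestsModule" />
    </modules>
  </system.webServer>
</configuration>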