
It's down when it's needed the most. It cannot handle much traffic; for me it's the most unreliable source of information. On Wed, Feb 29, 2012 at 5:03 PM, wool row <skunkworks100@gmail.com> wrote:
--
Endless
_______________________________________________ Skunkworks mailing list Skunkworks@lists.my.co.ke ------------ List info, subscribe/unsubscribe http://lists.my.co.ke/cgi-bin/mailman/listinfo/skunkworks ------------
Skunkworks Rules http://my.co.ke/phpbb/viewtopic.php?f=24&t=94 ------------ Other services @ http://my.co.ke
-- pmaingi@gmail.com Tel: 254 720 244970 No man in the world has more courage than the man who can stop after eating one peanut.

Can we do a harambee, like the one we did the other day, to purchase a server (or servers) for KNEC and give it to them as a gift? On 29 February 2012 17:38, ndungu stephen <ndungustephen@gmail.com> wrote:
But of course..

Why are we assuming the problem is the infrastructure? On Wednesday, February 29, 2012, Solomon Mbũrũ Kamau wrote:

I would imagine the problem is not in the infrastructure but in the implementation of the system. Just a quick question before I proceed: have you used the KRA online system? How do you like it so far?

Let me explain: you could have the most expensive servers money can buy, but if the developer of the system does not factor concurrency into his code, you will end up with knec.ac.ke or kra.go.ke. Then of course, after fixing the code, getting the correct server configuration and enough bandwidth, there is 'Layer 8 of the OSI model', a.k.a. policies, to ensure things run smoothly even after implementation.

Allow me to get very angry here. You see, there must be someone somewhere who sits down and thinks to himself/herself, "I know I am in charge of this website and last year it went belly up. However, that did not affect my life at all! If these people want to check their results, why are they in such a hurry, when they can go to their schools after a day or two and check the results? This is Kenya bana, wasiniharakishe!" How else would you explain the same problem occurring year in, year out? In addition to this, you and I will complain on this list, but come Friday we will have moved on to other 'important' issues and forgotten about the incident until February 2013! We will not even try to give a solution to the relevant parties!

Do you see why other countries keep calling us a Third World country? If we cannot fix a simple website, why are we so concerned with building a WHOLE TECHNOLOGY CITY and 'becoming a major outsourcing hub'? My 2 cents...

On 29 February 2012 22:57, Rad! <conradakunga@gmail.com> wrote:
Why are we assuming the problem is the infrastructure?
-- Kind Regards, Moses Muya.

@Moses, I am with you on that! I have been trying to get my brother's results. The poor soul spent the entire day yesterday, and the night as well, refreshing the KNEC page. The few times the page loaded, the results search page was disabled! He had also sent several text messages to the short code 5052 and, 24 hours later, no response! Something is not right somewhere! Why do they get away with this year in, year out? To hell with the Konza city yada yada... I should also be allowed to get angry here. On Thu, Mar 1, 2012 at 12:57 AM, Moses Muya <mouzmuyer@gmail.com> wrote:
-- Kind Regards,
Moses Muya.
-- Regards, Wanjiru Waweru Nairobi, KE

True. The fact that you can see "Failed connection to mysql DB" means that there's more than enough infrastructure:

(1) You get a response from the server - this means there is sufficient bandwidth, and the webserver that hosts the app has sufficient CPU cycles.
(2) They're using MySQL. Apart from potential limitations in the number of connections on Windows, you can easily do 500 - 1000 simultaneous connections. Only one connection is needed, though, so this should not be an issue.

Obviously, the architecture is poor and the app is not tested. The developer really skimped on their computer science classes, or didn't have any at all.

--- On Wed, 2/29/12, Rad! <conradakunga@gmail.com> wrote:
From: Rad! <conradakunga@gmail.com> Subject: Re: [Skunkworks] KNEC WEBSITE To: "Skunkworks Mailing List" <skunkworks@lists.my.co.ke> Date: Wednesday, February 29, 2012, 1:57 PM
Why are we assuming the problem is the infrastructure?

A member of this list who knows someone in KNEC said here that they know what the problem is and they know how to fix it; they just don't see the logic in fixing a problem which occurs once a year. So, in addition to lamenting here, why don't we think a little outside the box?

We propose a solution which not only works for this annual occurrence, but also works for other problems they have which we don't know about. For example, how about coming up with a solution which they can use to disseminate ALL exam results, not just KCSE, online? That should save them quite a bit in paper and printing costs.

But I think the real cause of this problem is lack of accountability: the CIRT team @ CCK focuses solely on security, the Ministry of Info. focuses on policies, and KICTB focuses on implementing some of those policies and a few other things, but not the quality of software. The directorate of e-government provides oversight on these systems. So if my opinions here are correct, someone @ Dr. Kate Getao's office is sleeping on the job.

On Thu, Mar 1, 2012 at 8:11 AM, Bernard Owuor <b_owuor@yahoo.com> wrote:
-- Regards, Peter Karunyu -------------------

@Peter, That assertion actually reflects poorly on the KNEC ICT guys. In Japan, over 60% of annual chocolate sales happen on Feb 14th. So KNEC would be one of those chocolate shops that doesn't stock enough and says, "HAH! Why is everyone rushing to buy chocolates? Anyway, this is a one-day shortage. After today, business will be just fine."

Enough bashing, then. Solutions? Can't think of any at the moment, but the above analogy points to a potential one: if I were an ad network (InMobi, Google etc.), or even just an advertiser in the education sector, I would pay KNEC to run their website for those few days. The other suggestion is similar to what you say: allow multiple content providers access to that info. This would create potentially beneficial competition.

--- On Wed, 2/29/12, Peter Karunyu <pkarunyu@gmail.com> wrote:
From: Peter Karunyu <pkarunyu@gmail.com> Subject: Re: [Skunkworks] KNEC WEBSITE To: "Skunkworks Mailing List" <skunkworks@lists.my.co.ke> Date: Wednesday, February 29, 2012, 11:52 PM

Surprising that they don't want to fix a problem that occurs only once a year, yet the system is only relevant once a year. It's better not to offer a service than to offer a substandard one. They must build the required capacity or just kill the service altogether; otherwise it's just a waste of resources. They could probably learn from the electoral commission's tallying system. On 3/1/2012 8:52 AM, Peter Karunyu wrote:

OK, in my opinion: all this data is read-only, and it's so little it can fit into RAM. I believe the limit should be bandwidth. OK, let's assume this implementation.

First of all, they get rid of that PHP file and replace it with a simple index.html; that way it will just be served, with nothing processed to generate HTML, plus it will be cached by the browser. They then add a JavaScript that simply does an AJAX query, receives a JSON response and generates the relevant HTML to display the JSON. That will move quite a lot of processing to the client side.

On the server, they can simply load all the records into an array and sort on index number. The index number can actually be treated as a long, so there is no complex comparison. The sorting will be done just once, when the server starts, since the data doesn't change. This will take O(n log n) time - about 5 seconds at the maximum. For any request, a binary search is done on the sorted data and a response is offered immediately. Since the data doesn't change, they can have a pool of threads servicing the requests and performing the binary searches concurrently. Each search takes O(log n) time, which is negligible for the amount of data involved.

If they want to keep access logs as well, that's pretty simple: they create a simple in-memory queue, add an entry to the queue, and leave the process of writing it to disk/database to a separate thread or a number of threads. That way, slow disk access speeds don't affect response time.

With that, the only limit left will be the bandwidth. Actually, with a 5 Mbps up and down link they will be sorted; all people are looking for is text, most of the time. So I just wonder: is this so hard to implement, or am I missing something? On Thu, Mar 1, 2012 at 9:51 AM, James Kagwe <kagwejg@gmail.com> wrote:
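A minimal sketch of the sorted-array-plus-binary-search lookup described above, assuming index numbers can be treated as plain integers (the sample index numbers and grades here are made-up placeholders, not real KNEC data):

```python
import bisect

# Hypothetical results, loaded once at server startup and sorted once:
# O(n log n) up front, then the data never changes.
RECORDS = sorted([
    (20401001001, "B+"),
    (20401001002, "A-"),
    (20401002005, "C"),
])
KEYS = [k for k, _ in RECORDS]  # parallel list of index numbers for bisect

def lookup(index_number: int):
    """O(log n) binary search over the immutable, pre-sorted results.

    Because the data is read-only, many threads can call this
    concurrently with no locking at all.
    """
    pos = bisect.bisect_left(KEYS, index_number)
    if pos < len(KEYS) and KEYS[pos] == index_number:
        return RECORDS[pos][1]
    return None

print(lookup(20401001002))  # A-
print(lookup(99999999999))  # None (unknown index number)
```

For 400,000-odd candidates this is about 19 comparisons per lookup, which supports Solomon's point that bandwidth, not search time, would be the bottleneck.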
-- Solomon Kariri, Software Developer, Cell: +254736 729 450 Skype: solomonkariri

@Solomon, kindly oblige me with the questions below... Let's assume traffic of 1.5 million users: since there were about 400,000 candidates, each one of them submits a request, and each one tells at most 3 siblings to do the same :-) On Thu, Mar 1, 2012 at 10:24 AM, solomon kariri <solomonkariri@gmail.com> wrote:
Ok in my opinion, All this data is read only, Its so little it can fit into RAM I believe the limit should be bandwidth, ok lets assume this implementation, First of all they get rid of that php file and replace it with a simple index.html, that way it will just be served, nothing processed to generate html, plus it will be cached by the browser. They will then add a javascript that simply does an ajax query, receives a JSON response and generates the relevant html to display the JSON. That will move quite a lot of processing to the client side.
They will need a PHP file @ the server side to service this JSON request, no? And I think there is no processing per se; all they are doing is fetch data, display data.
On the server, they can simply load all the records on an array and sort on index number.
Assuming they are using PHP, an array might not cut it, since it will have to be created for each request; 1.5m requests is a tad too many. On the other hand, if they have an in-memory MySQL table indexed on the candidates' index number, the entire table is loaded into RAM, making it a bit faster. Making the candidates' index number column NOT NULL and then using it in the WHERE clause will probably make the search results query really fast. Secondly, by playing around with key_buffer_size, they can actually load the entire index into RAM, making searches even faster!

That index number can actually be treated as a long, so no complex comparison. The sorting will be done just once, when the server starts, since the data doesn't change. This will take O(n log n) time - that will be like 5 seconds at the maximum. For any request, a binary search is done on the sorted data and a response is offered immediately. Since the data doesn't change, they can have a pool of threads servicing the requests and performing the binary searches concurrently. All searches will take O(log n) time, which is negligible for the amount of data involved.
You know, why are we searching in the first place? The data is read-only! So why not adopt a search-once-display-many-times strategy? The first time a candidate is searched for, cache the result and display the cached copy to the other 3 siblings! But wait a minute: we know that at most 400,000 students will search, so why not search for them before they do and cache the results? Write a simple routine which outputs the results for all these students to static files. If we are dealing with static files, then we can get rid of Apache and instead use Nginx or lighttpd.
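A rough sketch of that pre-generation routine, assuming results are dumped to one static JSON file per candidate which a plain file server (Nginx, lighttpd) can then serve with no database behind it; the index numbers and grades are made-up placeholders:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical results table; in reality this would be dumped
# from the KNEC database in one pass before results day.
RESULTS = {
    "20401001001": {"grade": "B+"},
    "20401001002": {"grade": "A-"},
}

def pregenerate(out_dir: str) -> int:
    """Write one static JSON file per candidate, so every request
    becomes a plain static-file hit with zero per-request computation."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for index_number, record in RESULTS.items():
        (out / f"{index_number}.json").write_text(json.dumps(record))
    return len(RESULTS)

out_dir = tempfile.mkdtemp()
count = pregenerate(out_dir)
print(count)  # number of files written
```

400,000 small files generated once is cheap, and it removes both the database and the application code from the serving path entirely.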
If they want to keep access logs as well, that's pretty simple: they create a simple in-memory queue, add an entry to the queue, and leave the work of writing it to disk/database to a separate thread or a number of threads. That way, slow disk access speeds don't affect response time. With that, the only limit left will be the bandwidth, and actually with a 5 Mbps up and down link they will be sorted; all people are looking for is text, most of the time. So I just wonder, is this so hard to implement, or am I missing something?
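The logging idea above, an in-memory queue drained by a separate writer, is a sketch like this in Java (the `flushed` list is a stand-in for the real disk/database write):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Request threads only enqueue (cheap, in memory); a background writer
// thread drains the queue and performs the slow disk write in batches.
public class AsyncAccessLog {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<String> flushed = new ArrayList<>(); // stand-in for disk/database

    // Called on the request path; never blocks the response.
    public void log(String entry) {
        queue.offer(entry);
    }

    // Normally called in a loop by a dedicated writer thread.
    public void flushOnce() {
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);      // grab everything queued so far
        flushed.addAll(batch);     // real code would batch-write to disk here
    }
}
```

With this split, disk latency only ever delays the writer thread, never a user's response.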
If only the techies there were diligent, they could solve this problem at zero cost, since all the tools and solutions they need are open source.
On Thu, Mar 1, 2012 at 9:51 AM, James Kagwe <kagwejg@gmail.com> wrote:
Surprising that they don't want to fix a problem that occurs only once a year, yet the system is only relevant once a year. It's better not to offer a service than to offer a substandard service. They must build the required capacity or just kill the service altogether; otherwise it's just a waste of resources. They could probably learn from the electoral commission's tallying system.
On 3/1/2012 8:52 AM, Peter Karunyu wrote:
A member of this list who knows someone in KNEC said here that they know what the problem is and they know how to fix it; they just don't see the logic in fixing a problem which occurs once a year.
So, in addition to lamenting here, why don't we think a lil bit outside the box;
We propose a solution which not only works for this annual occurrence, but also works for other problems they have which we don't know about. For example, how about coming up with a solution which they can use to disseminate ALL exam results online, not just KCSE? That should save them quite a bit in paper and printing costs.
But I think the real cause of this problem is lack of accountability: the CIRT team at CCK focuses solely on security, the Ministry of Information focuses on policies, and KICTB focuses on implementing some of those policies and a few other things, but not the quality of software. The directorate of e-government provides oversight on these systems. So if my opinions here are correct, someone at Dr. Kate Getao's office is sleeping on the job.
On Thu, Mar 1, 2012 at 8:11 AM, Bernard Owuor <b_owuor@yahoo.com> wrote:
True. The fact that you can see "Failed connection to mysql DB" means that there's more than enough infrastructure. (1) You get a response from the server; this means there is sufficient bandwidth, and the web server that hosts the app has sufficient CPU cycles. (2) They're using MySQL. Apart from potential limitations in the number of connections on Windows, you can easily do 500-1000 simultaneous connections. Only one connection is needed, though, so this should not be an issue. Obviously, the architecture is poor and the app is not tested. The developer really skimped on their computer science classes, or didn't have any at all.
--- On *Wed, 2/29/12, Rad! <conradakunga@gmail.com>* wrote:
From: Rad! <conradakunga@gmail.com> Subject: Re: [Skunkworks] KNEC WEBSITE To: "Skunkworks Mailing List" <skunkworks@lists.my.co.ke> Date: Wednesday, February 29, 2012, 1:57 PM
Why are we assuming the problem is the infrastructure?
On Wednesday, February 29, 2012, Solomon Mbũrũ Kamau wrote:
Can we do a harambee, like the one we did, the other day, for the purchase of a server(s) for KNEC and give it to them as a gift?
On 29 February 2012 17:38, ndungu stephen <ndungustephen@gmail.com>wrote:
But of course.. _______________________________________________ Skunkworks mailing list Skunkworks@lists.my.co.ke ------------ List info, subscribe/unsubscribe http://lists.my.co.ke/cgi-bin/mailman/listinfo/skunkworks ------------
Skunkworks Rules http://my.co.ke/phpbb/viewtopic.php?f=24&t=94 ------------ Other services @ http://my.co.ke
-- Regards, Peter Karunyu -------------------
-- Solomon Kariri,
Software Developer, Cell: +254736 729 450 Skype: solomonkariri
-- Regards, Peter Karunyu -------------------

On Thu, Mar 1, 2012 at 10:43 AM, Peter Karunyu <pkarunyu@gmail.com> wrote:
@Solomon, kindly oblige me with questions below...
Let's assume traffic of 1.5 million users: there were about 400,000 candidates, each one of them submits a request, and each one tells at most 3 siblings to do the same :-)
Ok, so 1.5 million requests, each taking at most 10 milliseconds, means 15 million milliseconds of work in total, i.e. 15,000 seconds. Let's assume we have 1,000 threads servicing concurrently, so we will require 15 seconds to service all of it. 15 seconds is so little, taking into account that it is practically impossible to have 1.5 million requests happening at EXACTLY the same time. So that will be no bottleneck.
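The back-of-envelope arithmetic above (1.5M requests, 10 ms each, 1,000 worker threads; all figures are the thread's assumptions) reduces to a one-line formula:

```java
// Wall-clock seconds needed to serve `requests` jobs of `msEach`
// milliseconds each, spread evenly across `threads` concurrent workers.
public class Capacity {
    public static long secondsToServe(long requests, long msEach, long threads) {
        return requests * msEach / threads / 1000;
    }
}
```

Plugging in the thread's numbers gives the same 15 seconds of ideal wall-clock time; real throughput would of course be lower once bandwidth and scheduling overhead are counted.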
On Thu, Mar 1, 2012 at 10:24 AM, solomon kariri <solomonkariri@gmail.com>wrote:
Ok in my opinion, All this data is read only, Its so little it can fit into RAM I believe the limit should be bandwidth, ok lets assume this implementation, First of all they get rid of that php file and replace it with a simple index.html, that way it will just be served, nothing processed to generate html, plus it will be cached by the browser. They will then add a javascript that simply does an ajax query, receives a JSON response and generates the relevant html to display the JSON. That will move quite a lot of processing to the client side.
They will need a PHP file @ the server side to service this JSON request, no? And I think there is no processing per se; all they are doing is fetch data, display data.
Well, first of all, I won't use PHP. If it were me, I would use Java; that's what I'm good at and what I can explain things with most easily. So they would have a servlet; the data would be loaded into a static array the first time the server starts, and the array stays in memory forever.
On the server, they can simply load all the records on an array and sort on index number.
Assuming they are using PHP, an array might not cut it since it will have to be created for each request. 1.5m requests is a tad too many. On the other hand, if they have an in-memory MySQL table indexed on the candidates index number, the entire table is loaded into RAM, making it a bit faster. Making the candidates index number column not allow NULLs and then use it in the WHERE clause will probably make the search results query really really fast.
With my Java approach, the array is created only once, when the server starts. I don't know much about how PHP does this. As for the MySQL thing, it is most likely to keep going back to the file system once in a while; what I want is a system that NEVER goes back to the hard disk to look up anything, where all the information is in RAM. We already know index numbers are unique, and we have them already, so there is no need for disallowing NULLs, and no SQL queries run anywhere. SQL queries need to be parsed and optimized, and I don't know how well MySQL does this or how its query caching works, but all in all, with my approach MySQL doesn't come up anywhere except the first time the server starts and the data is loaded into the array.
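The "load once at server start, then RAM only" idea reads like this as a sketch (here `loadFrom` stands in for the one-time read from MySQL; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Data is loaded into a static structure exactly once; every lookup
// afterwards is a pure in-memory hash probe, never a disk or SQL hit.
public class ResultCache {
    private static Map<Long, String> RESULTS;   // index number -> result line

    // Called once at server start; subsequent calls are no-ops.
    public static synchronized void loadFrom(Map<Long, String> source) {
        if (RESULTS == null) {
            RESULTS = new HashMap<>(source);
        }
    }

    public static String lookup(long indexNumber) {
        return RESULTS.get(indexNumber);        // RAM only, O(1) on average
    }
}
```

A `HashMap` gives O(1) average lookups; the sorted-array-plus-binary-search variant discussed earlier trades that for lower memory overhead.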
Secondly, playing around with key_buffer_size, they can actually load the entire index onto RAM, making searches even faster!
This is totally unnecessary with my approach.
That index number can actually be treated as a long, so no complex
comparison. The sorting will be done just once, when the server starts since the data doesn't change. This will take O(nlogn) time. that will be like 5 seconds on the maximum. For any requests, a binary search is done on the sorted data and response is offered immediately. Since the data doesn't change, they can have a pool of threads servicing the requests and performing the binary searches concurrently. All searches will take O(logn) time, that's like negligible for the amount of data involved.
You know, why are we searching in the first place? The data is read only! So why not adopt a strategy of search-once-display-many-times? If a candidate is searched the first time, cache the results and display the cached results to the other 3 siblings!
No, I wouldn't suggest caching on the server side, just the client side. We can make the JavaScript use GET requests and tell the browser that the results are cacheable. That way, repeats of the same request from the same browser will use the cache. On the server side, RAM speed is quite high, and we don't want to use up so much RAM storing caches of every result.
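Telling the browser the results are cacheable boils down to the right response headers on the GET endpoint. A minimal sketch of that header-building logic (the max-age value is an assumption, not anyone's actual policy):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Headers for a result response served over GET. Because a published
// result never changes, a long freshness lifetime is safe: the browser
// will answer repeat lookups from its own cache without contacting us.
public class CacheHeaders {
    public static Map<String, String> forResult(long maxAgeSeconds) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Cache-Control", "public, max-age=" + maxAgeSeconds);
        return headers;
    }
}
```

Whatever server framework is used would copy these onto the HTTP response; with `max-age=86400`, a candidate's siblings re-checking from the same browser that day generate no server traffic at all.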
But wait a minute, we know that at most 400,000 students will search, so why not search for them before they do and cache the results? Write a simple routine which outputs the results for all these students to static files.
NO, not at all: that would involve disk access, and disk access is usually very slow compared to RAM and processor speed. We are trying as much as possible to avoid ANY disk access.
If we are dealing with static files, then we can get rid of Apache and instead use Nginx or LightHTTPD.
So we can't use this, because the file-system-based approach is not recommended.
If they want to keep access logs as well, well, that's pretty simple, they will create a simple in memory queue and add an entry to the queue and leave the process of writing that to disk/database to a separate thread or a number of threads, that way, the slow disk access speeds don't affect response time. With that, the only limit left will be the bandwidth. Actually with a 5mbps up and down link, they will be sorted, all people are looking for is text, most of the time. So I just wonder, is this so hard to implement or I'm I missing something?
If only the techies there are diligent, they can solve this problem with zero cost since all the tools and solutions they need are open source.
Actually, I can add something here to make it more efficient. Seek times on disks are usually slow, but disks are quite good at batch writes. So instead of saving the logs to disk/database directly, the thread responsible for this simply blocks access to the incoming queue's lock for about 5 ms every 2 minutes, creates a new empty queue, and keeps a reference to the current one. RAM copying is quite fast; it is just a matter of switching the memory reference to the newly created queue. Then it unblocks, and queueing can continue. Instead of processing the swapped-out queue in place, it simply serializes it in a batch write to disk and frees the space it was occupying in RAM, leaving the space available for new queueing. The serialized queues can then be processed later, even on another machine.
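The swap trick just described, hold the lock only long enough to exchange a reference, then batch-write outside the lock, looks like this as a sketch (the `batches` list stands in for the serialized disk writes):

```java
import java.util.ArrayList;
import java.util.List;

// The lock is held only for a reference swap (microseconds), so request
// threads are never stalled behind a disk write.
public class SwapLog {
    private List<String> active = new ArrayList<>();
    final List<List<String>> batches = new ArrayList<>(); // stand-in for disk

    // Request path: in-memory append only.
    public synchronized void add(String entry) {
        active.add(entry);
    }

    // Writer thread, e.g. every 2 minutes.
    public void flush() {
        List<String> toWrite;
        synchronized (this) {          // lock held only for the swap
            toWrite = active;
            active = new ArrayList<>();
        }
        batches.add(toWrite);          // slow batch write happens unlocked
    }
}
```

Compared with draining entry-by-entry, swapping the whole queue keeps the critical section constant-time regardless of how many entries have accumulated.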
-- Solomon Kariri, Software Developer, Cell: +254736 729 450 Skype: solomonkariri

The problem at KNEC could be a bad app/db design, inadequate infrastructure, or a conspiracy… I can’t tell for sure, but I can draw from my experiences having been down this path.

In the case I was involved in, we threw more of everything at our implementation: hours, skills, relationships, life, bandwidth, hardware and even more hardware, but it would only work for a while. We quit shared hosting and set up dedicated servers, wapi!

Over the years, here are some of the pointers I know for sure work:

1) Caching is king. Much as man shall not live on bread alone, so shan’t a web server’s functionality rely on disk-based databases alone… There are no alternatives, especially for high-volume traffic. Try Googling:
a. Memcache
b. PHP APC
c. Apache mod_proxy or
d. Squid, among others. Each has its best-scenario implementation.

2) The web-facing side of your application ought to see your DB as storage and storage alone, not as a runtime working set as you would with a desktop application. If you need to join two or three tables to come up with content for a page, join those tables offline into a different table, not while your users are staring at their screens.

3) Cache as much as you can at your application level. When you are done caching, kindly cache some more! Note: using an RDBMS to maintain user sessions or authentication beyond the initial name/password matching is 90’s programming; you need to enroll for that chipuka certification thingy! There is a good reason why servers come with several GBs of RAM nowadays as standard.

4) When you cache in RAM, you will benefit from extra hard disk life. Ask folks who play MP3s from mechanical disks how often theirs crash. While relational databases are convenient in terms of retrieval, they deliver this at a comparatively heavy footprint with respect to system resources.
This is of course worsened by poor/inefficient indices and general bad database/app design.

Respect standards:

5) When you cannot for the life of you cache anymore, tell others to cache for you via HTTP headers like:
a. Cache-Control:
b. Expires:
c. Last-Modified:

6) Just like in a KNEC exam (sic), answer what you are asked, not with your own random stuff. Process and obey HTTP headers like those below, and respond accordingly:
a. If-Modified-Since
b. If-Match
c. If-None-Match, etc.
If you have been caching from points 1-3, this ought to be a trivial exercise. If you obey the above two, my experience shows you could eliminate up to 30% or more of processing requirements, and subsequent bandwidth requirements.

7) Speed that thrills does not kill: if you have been caching and observing standards so far, you will have noticed an increase in speed. Dispense with each request quickly.
a. If you use keep-alives, whether at web server level or at DB level using persistent connections, reconsider. This is contentious, but in my experience keep-alives in many instances chew up valuable resources idling, waiting and generally being of no use.
b. Web browsers do not give marks for well- or poorly-implemented “for”, “while” and “repeat” loops, nor fancy problem-solving code; instead they time out. If you have complex logic to implement, do it via stored procedures as the data goes into the DB, not while it is being retrieved. Spare your scripts for rendering, spare your server some CPU cycles, and spare us the environment.
c. I am sure you optimized your queries to the best of your abilities.

8) Size matters:
a. Compress your output.
b. Eliminate whitespace in your generated HTML if you can. This is easy if you use templates for your pages that are filled in by the scripts. You could have indented sources in dev that you “compile” to remove whitespace and “check out” into prod.
c. Your .js and .css assets ought to be pre-compressed into .gz and served as such.
For ye Joomla fanatics, join up your JS and CSS assets into as few files as possible; spare your server the agony of serving 20+ 10KB files.

9) Benchmark to know your limits. Apache has a nice tool called ab for this kind of thing. And when you reach your limits, bow out gracefully like the educated fellow you are, not like the habitual drunk at the local.

10) Most importantly, monitor your parameters: bandwidth, disk activity, RAM utilization, hardware health, power consumption etc., and compare these to the number of users online. If possible, do what the telcos do with their ARPU metric: reduce your entire servers’ operation to a per-user metric, e.g. bandwidth required per so many users or per unit of revenue, or any other combo that makes business sense.

These are my 2 bits, scraped from here and there over the years, to mitigate high load. We did trip on load afterwards, but the above steps had set idle several servers from our initial buying frenzy.

Regards
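Point 6 above, honouring validators instead of re-sending the body, reduces to a small decision: compare the resource's Last-Modified time with the client's If-Modified-Since header. A hedged sketch of that logic, assuming one timestamp (epoch millis) tracked per resource:

```java
// Decide between 304 Not Modified and a full 200 response.
// ifModifiedSinceMillis is null when the client sent no validator.
public class ConditionalGet {
    public static int statusFor(long lastModifiedMillis, Long ifModifiedSinceMillis) {
        if (ifModifiedSinceMillis != null && lastModifiedMillis <= ifModifiedSinceMillis) {
            return 304; // client's cached copy is still current: send no body
        }
        return 200;     // send a fresh body (with an updated Last-Modified)
    }
}
```

For results data that never changes after publication, almost every repeat visit resolves to a 304, which is exactly where Eugene's estimated 30% savings in processing and bandwidth would come from.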

@Eugene, nice! very nice. Me copying, printing and sticking on my wall. On Thu, Mar 1, 2012 at 2:07 PM, Eugene Lidede (Synergy) < eugene@synergy.co.ke> wrote:
The problem at KNEC could be a bad app/db design, inadequate infrastructure, or a conspiracy… I can't tell for sure, but I can draw from my experience, having been down this path.
In the case I was involved in, we threw more of everything at our implementation: hours, skills, relationships, life, bandwidth, hardware and even more hardware, but it would only work for a while. We quit shared hosting and set up dedicated servers, *wapi*!
Over the years, here are some of the pointers I know for sure work:
1) Caching is king. Much like man shall not live on bread alone, so shall web servers not rely on disk-based databases alone… There are no alternatives, especially for high-volume traffic. Try Googling:
a. Memcache
b. PHP APC
c. Apache mod_proxy or
d. Squid, among others. Each has its best-scenario implementation.
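To make point 1 concrete, here is a minimal sketch of the idea behind those tools (not Memcache's actual API): a bounded in-RAM cache in Java, where the hypothetical `resultFromDatabase` stands in for the slow disk-backed query we are trying to avoid.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    private final int capacity;
    private final Map<String, String> cache;

    public Main(int capacity) {
        this.capacity = capacity;
        // An access-order LinkedHashMap gives least-recently-used eviction for free
        this.cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > Main.this.capacity;
            }
        };
    }

    // Look in RAM first; fall through to the slow path only on a miss
    public synchronized String get(String indexNumber) {
        String hit = cache.get(indexNumber);
        if (hit != null) return hit;
        String fresh = resultFromDatabase(indexNumber);
        cache.put(indexNumber, fresh); // put() triggers removeEldestEntry when over capacity
        return fresh;
    }

    public synchronized int size() { return cache.size(); }

    // Stand-in for the expensive disk-backed query (hypothetical)
    static String resultFromDatabase(String indexNumber) {
        return "RESULT-FOR-" + indexNumber;
    }

    public static void main(String[] args) {
        Main cache = new Main(2);
        cache.get("100001");
        cache.get("100002");
        cache.get("100001");            // touch 100001 so it is most recently used
        cache.get("100003");            // evicts 100002, the least recently used
        System.out.println(cache.size()); // 2
    }
}
```

Memcache, APC and friends do exactly this, only across processes and machines; the point is that the hot path never touches disk.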
2) The web-facing side of your application ought to see your DB as storage and storage alone, not as a runtime working set as you would with a desktop application. If you need to join two or three tables to come up with content for a page, join those tables offline into a different table, not while your users are staring at their screens.
3) Cache as much as you can at the application level. When you are done caching, kindly cache some more!
Note: Using an RDBMS to maintain user sessions or authentication beyond the initial name/password matching is 90's programming; you need to enroll for that chipuka certification thingy! There is a good reason why servers come with several GBs of RAM as standard nowadays.
4) When you cache in RAM, you will benefit from extra hard disk life. Ask folks who play MP3s from mechanical disks how often theirs crash.
While relational databases are convenient in terms of retrieval, they deliver this at a comparatively heavy footprint with respect to system resources. This is of course worsened by poor/inefficient indices and general bad database/app design.
Respect standards
5) When you cannot for the life of you cache any more, tell others to cache for you via HTTP headers like:
a. Cache-Control:
b. Expires:
c. Last-Modified:
6) Just like in a KNEC exam (sic), answer what you are asked, not with your own random stuff. Process and obey HTTP headers like those below, and respond accordingly:
a. If-Modified-Since
b. If-Match
c. If-None-Match etc.
If you had been caching from points 1-3, this ought to be a trivial exercise. If you obey the above two, my experience shows you could eliminate up to 30% or more of your processing requirements, and subsequent bandwidth requirements.
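Point 6 is cheap to implement. A sketch of the If-Modified-Since decision in Java; the `statusFor` helper is made up for illustration, and times are epoch seconds since HTTP dates have one-second resolution.

```java
public class Main {
    // Decide whether a conditional GET can be answered with 304 Not Modified.
    // A request with no If-Modified-Since header is passed as -1.
    public static int statusFor(long lastModified, long ifModifiedSince) {
        if (ifModifiedSince >= 0 && lastModified <= ifModifiedSince) {
            return 304; // client copy still fresh: send headers only, no body
        }
        return 200;     // send the full body (plus a Last-Modified header)
    }

    public static void main(String[] args) {
        long published = 1_330_560_000L;                          // when results went up
        System.out.println(statusFor(published, -1));             // 200: first visit
        System.out.println(statusFor(published, published));      // 304: repeat visit
        System.out.println(statusFor(published + 60, published)); // 200: page updated
    }
}
```

Every 304 is a response with no body at all, which is where the bandwidth savings mentioned above come from.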
7) Speed that thrills does not kill: if you have been caching and observing standards so far, you will have noticed an increase in speed. Dispense with each request quickly.
a. If you use keep-alives, whether at the web server level or at the DB level via persistent connections, reconsider. This is contentious, but in my experience keep-alives in many instances chew up valuable resources idling, waiting and generally being of no use.
b. Web browsers do not give marks for good (or poorly implemented) "for", "while" and "repeat" loops, nor for fancy problem-solving code; instead they time out. If you have complex logic to implement, do it via stored procedures, as the data goes into the DB, not while it is being retrieved; spare your scripts for rendering, spare your server some CPU cycles, and spare us the environment.
c. I am sure you optimized your queries to the best of your abilities.
8) Size matters:
a. Compress your output.
b. Eliminate whitespace in your generated HTML if you can. This is easy if you use templates for your pages that are filled in by the scripts. You could have indented sources on dev that you "compile" to remove whitespace and "check out" into prod.
c. Your .js and .css assets ought to be pre-compressed into .gz and served as such. For ye Joomla fanatics, join up your JS and CSS assets into as few files as possible. Spare your server the agony of serving 20+ 10KB files.
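Point 8a is usually one line of server configuration, but the payoff is easy to demonstrate. A sketch using the JDK's built-in GZIPOutputStream; the repetitive table markup below is made-up sample data standing in for a generated results page.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class Main {
    // Gzip a payload in memory, as a web server would before sending the response body
    public static byte[] gzip(byte[] plain) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(plain);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Generated results pages are highly repetitive markup, which gzip loves
        StringBuilder page = new StringBuilder();
        for (int i = 0; i < 500; i++) {
            page.append("<tr><td>SUBJECT</td><td>GRADE</td></tr>\n");
        }
        byte[] plain = page.toString().getBytes(StandardCharsets.UTF_8);
        byte[] packed = gzip(plain);
        System.out.println(plain.length + " bytes -> " + packed.length + " bytes");
    }
}
```

On markup like this the compressed body is a small fraction of the original, which multiplies whatever bandwidth you have.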
9) Benchmark to know your limits. Apache has a nice tool called ab for this kind of thing. And when you reach your limits, bow out gracefully like the educated fellow you are, not like the habitual drunk at the locals.
10) Most importantly, monitor your parameters: bandwidth, disk activity, RAM utilization, hardware health, power consumption etc., and compare these to the number of users online. If possible, do what the telcos do with their ARPU metric: reduce your entire server operation to a per-user metric, e.g. bandwidth required per so many users or per unit of revenue, or any other combo that makes business sense.
These are my 2 bits, scraped from here and there over the years, to mitigate high load. We did trip on load afterwards, but the above steps had left several servers from our initial buying frenzy sitting idle.
Regards
*From:* skunkworks-bounces@lists.my.co.ke [mailto:skunkworks-bounces@lists.my.co.ke] *On Behalf Of* solomon kariri *Sent:* Thursday, March 01, 2012 11:13 AM *To:* Skunkworks Mailing List
*Subject:* Re: [Skunkworks] KNEC WEBSITE
On Thu, Mar 1, 2012 at 10:43 AM, Peter Karunyu <pkarunyu@gmail.com> wrote:
@Solomon, kindly oblige me with the questions below...
Let's assume traffic of 1.5 million users. Since there were about 400,000 candidates, each one of them submits a request, and each one tells at most 3 siblings to do the same :-)
OK, so 1.5 million simultaneous requests. Each request will take at most 10 milliseconds, so we will need 15 million milliseconds in total; that's 15,000 seconds. Let's assume we have 1,000 threads servicing concurrently, so we will require 15 seconds to service this. 15 seconds is so little, taking into account that it is really practically impossible to have 1.5 million requests happening at EXACTLY the same time. So that will be no bottleneck.
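Solomon's arithmetic above can be checked mechanically; the helper name is made up for illustration.

```java
public class Main {
    // Back-of-envelope capacity check: total work in milliseconds,
    // divided across a pool of workers, converted to seconds.
    public static long secondsToDrain(long requests, long msPerRequest, long workers) {
        long totalMs = requests * msPerRequest; // 1,500,000 * 10 = 15,000,000 ms of work
        return totalMs / workers / 1000;        // 1,000 workers -> 15,000 ms -> 15 s
    }

    public static void main(String[] args) {
        System.out.println(secondsToDrain(1_500_000L, 10, 1_000)); // 15
    }
}
```

The same formula shows what the candidates-only load looks like: 400,000 requests at 10 ms across 100 threads drains in 40 seconds.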
On Thu, Mar 1, 2012 at 10:24 AM, solomon kariri <solomonkariri@gmail.com> wrote:
Ok, in my opinion:
All this data is read-only,
It's so little it can fit into RAM,
I believe the limit should be bandwidth. OK, let's assume this implementation:
First of all, they get rid of that PHP file and replace it with a simple index.html; that way it will just be served, with nothing processed to generate HTML, plus it will be cached by the browser.
They will then add JavaScript that simply does an AJAX query, receives a JSON response and generates the relevant HTML to display it. That will move quite a lot of processing to the client side.
They will need a PHP file at the server side to service this JSON request, no? And I think there is no processing per se; all they are doing is fetching data and displaying it.
Well, first of all, I won't use PHP. If it were me, I would use Java; that's what I'm good at and what I can explain things with most easily. So they will have a servlet; the data will be loaded into a static array the first time the server starts, and the array stays in memory forever.
On the server, they can simply load all the records into an array and sort on index number.
Assuming they are using PHP, an array might not cut it, since it will have to be created for each request, and 1.5m requests is a tad too many. On the other hand, if they have an in-memory MySQL table indexed on the candidate's index number, the entire table is loaded into RAM, making it a bit faster. Making the index number column disallow NULLs and then using it in the WHERE clause will probably make the search query really, really fast.
With my Java approach, the array is created only once, when the server starts; I don't know much about how PHP does this. As for the MySQL approach, it is most likely to keep going back to the file system once in a while. What I want is a system that NEVER goes back to the hard disk to look up anything; all the information is in RAM. We already know index numbers are unique, and we have them already, so there is no need for disallowing NULLs, and no SQL queries are run anywhere. SQL queries need to be parsed and optimized, and I don't know how well MySQL does this, or its query caching protocol, but all in all, with my approach MySQL doesn't come up anywhere except the first time the server starts and the data is loaded into the array.
Secondly, playing around with key_buffer_size, they can actually load the entire index into RAM, making searches even faster!
This is totally unnecessary with my approach.
That index number can actually be treated as a long, so no complex comparison is needed. The sorting will be done just once, when the server starts, since the data doesn't change. This will take O(n log n) time; that will be like 5 seconds at the maximum. For any request, a binary search is done on the sorted data and the response is offered immediately. Since the data doesn't change, they can have a pool of threads servicing the requests and performing the binary searches concurrently. All searches will take O(log n) time, which is negligible for the amount of data involved.
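The sort-once, binary-search-per-request idea above fits in a few lines of Java; the `lookup` helper and the toy candidate data are made up for illustration.

```java
import java.util.Arrays;

public class Main {
    // O(log n) lookup against keys sorted once at startup; never touches disk
    public static String lookup(long[] sortedKeys, String[] sortedResults, long indexNumber) {
        int pos = Arrays.binarySearch(sortedKeys, indexNumber);
        return pos >= 0 ? sortedResults[pos] : "NOT FOUND";
    }

    public static void main(String[] args) {
        // Toy data: candidate index numbers (pre-sorted) with made-up results
        long[] keys = {20101007L, 20413001L, 20500042L, 20999115L};
        String[] results = {"A-", "B+", "A", "C"};
        System.out.println(lookup(keys, results, 20500042L)); // A
        System.out.println(lookup(keys, results, 11111111L)); // NOT FOUND
    }
}
```

Because the arrays are never mutated after startup, any number of threads can run `lookup` concurrently without locking.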
You know, why are we searching in the first place? The data is read-only! So why not adopt a strategy of search-once-display-many-times? If a candidate is searched for the first time, cache the result and display the cached result to the other 3 siblings!
No, I wouldn't suggest caching on the server side, just the client side. We can make the JavaScript use GET and tell the browser that the results are cacheable. That way the same requests coming from the same browser will use the cache. On the server side, the RAM speed is quite high, and we don't want to use up so much RAM storing caches of every result.
But wait a minute: we know that at most 400,000 students will search, so why not search for them before they do and cache the results? Write a simple routine which outputs the results for all these students to static files.
No, not at all; that would involve disk access. Disk access is usually very slow compared to RAM and processor speed, and we are trying as much as possible to avoid ANY disk access.
If we are dealing with static files, then we can get rid of Apache and instead use Nginx or lighttpd.
So we can't use this, because the file-system-based approach is not recommended.
If they want to keep access logs as well, that's pretty simple: they will create a simple in-memory queue, add an entry to the queue, and leave the process of writing it to disk/database to a separate thread or a number of threads. That way, the slow disk access speeds don't affect response time. With that, the only limit left will be the bandwidth. Actually, with a 5Mbps up and down link they will be sorted; all people are looking for is text, most of the time.
So I just wonder: is this so hard to implement, or am I missing something?
If only the techies there were diligent, they could solve this problem at zero cost, since all the tools and solutions they need are open source.
Actually, I can add something here to make it more efficient. Seek times on disks are usually slow; disks are quite good at batch writes, though. So instead of having to save the logs to disk/database directly, the thread responsible for this simply blocks access to the incoming queue's lock for about 5ms every 2 minutes, creates a new empty queue and keeps a copy of the current one. RAM copying is quite fast; it will be just a matter of changing a memory reference to the newly created queue. Then it unblocks, and queueing can continue. Instead of processing the copied queue in place, it simply serializes it in a batch write to disk and frees the space it was occupying in RAM, leaving the space available for new queueing. The serialized queues can then be processed later, even on another machine.
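The queue-swap trick described above can be sketched like this. Method names are made up for illustration; a real version would call `swapForFlush` from a timer thread and serialize each returned batch to disk.

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Request threads append log entries here; only the flusher ever replaces it
    private static List<String> queue = new ArrayList<>();
    private static final Object lock = new Object();

    // Called on the request path: cheap, in-RAM, no disk access
    public static void log(String entry) {
        synchronized (lock) {
            queue.add(entry);
        }
    }

    // Called periodically by a background thread: swap the queue reference
    // (nanoseconds under the lock), then write the old batch at leisure
    public static List<String> swapForFlush() {
        synchronized (lock) {
            List<String> full = queue;
            queue = new ArrayList<>();
            return full; // caller batch-writes this to disk off the hot path
        }
    }

    public static void main(String[] args) {
        log("GET /results?index=20413001");
        log("GET /results?index=20101007");
        List<String> batch = swapForFlush();
        System.out.println(batch.size());          // 2 entries ready for one batch write
        log("GET /results?index=20999115");        // queueing continues uninterrupted
        System.out.println(swapForFlush().size()); // 1
    }
}
```

The only contention is the reference swap itself, so the slow disk write never sits between a request and its response.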
On Thu, Mar 1, 2012 at 9:51 AM, James Kagwe <kagwejg@gmail.com> wrote:
Surprising that they don't want to fix a problem that occurs only once a year, yet the system is only relevant once a year. It's better not to offer a service than to offer a substandard service. They must build the required capacity or just kill the service altogether; otherwise it's just a waste of resources. They could probably learn from the electoral commission's tallying system.
On 3/1/2012 8:52 AM, Peter Karunyu wrote:
A member of this list who knows someone at KNEC said here that they know what the problem is and how to fix it; they just don't see the logic in fixing a problem which occurs once a year.
So, in addition to lamenting here, why don't we think a lil bit outside the box:
We could propose a solution which not only works for this annual occurrence, but also works for other problems they have which we don't know about. For example, how about coming up with a solution which they can use to disseminate ALL exam results online, not just KCSE? That should save them quite a bit in paper and printing costs.
But I think the real cause of this problem is lack of accountability: the CIRT team at CCK focuses solely on security, the Ministry of Information focuses on policies, KICTB focuses on implementing some of those policies and a few other things (not including quality of software), and the Directorate of e-Government provides oversight on these systems. So if my opinions here are correct, someone at Dr. Kate Getao's office is sleeping on the job.
On Thu, Mar 1, 2012 at 8:11 AM, Bernard Owuor <b_owuor@yahoo.com> wrote:
True. The fact that you can see "Failed connection to mysql DB" means that there's more than enough infrastructure. (1) You get a response from the server - this means there is sufficient bandwidth, and the web server that hosts the app has sufficient CPU cycles.
(2) They're using MySQL. Apart from potential limitations in the number of connections on Windows, you can easily do 500-1000 simultaneous connections. Only one connection is needed, though, so this should not be an issue.
Obviously, the architecture is poor and the app is not tested. The developer really skimped on their computer science classes, or didn't have any at all.
--- On *Wed, 2/29/12, Rad! <conradakunga@gmail.com>* wrote:
From: Rad! <conradakunga@gmail.com> Subject: Re: [Skunkworks] KNEC WEBSITE To: "Skunkworks Mailing List" <skunkworks@lists.my.co.ke> Date: Wednesday, February 29, 2012, 1:57 PM
Why are we assuming the problem is the infrastructure?
On Wednesday, February 29, 2012, Solomon Mbũrũ Kamau wrote:
Can we do a harambee, like the one we did the other day, for the purchase of server(s) for KNEC and give it to them as a gift?
On 29 February 2012 17:38, ndungu stephen <ndungustephen@gmail.com> wrote:
But of course..
-- Regards, Peter Karunyu -------------------

Hi everyone, Good list Eugene. Allow me to add a few comments where I think there is more to it than this (see inline comments). On 3/1/12 2:07 PM, Eugene Lidede (Synergy) wrote:
1) Caching is king. Much like man shall not live on bread alone, so shall web servers not rely on disk-based databases alone... There are no alternatives, especially for high-volume traffic. Try Googling:
Amen - but most web apps will gain more from front-end optimization than from back-end optimization. See: http://www.youtube.com/watch?v=BTHvs3V8DBA
d. Squid, among others. Each has its best-scenario implementation.
Use Varnish instead of Squid; it is SO much more efficient (PHK is a genius).
2) The web-facing side of your application ought to see your DB as storage and storage alone, not as a runtime working set as you would with a desktop application. If you need to join two or three tables to come up with content for a page, join those tables offline into a different table, not while your users are staring at their screens.
I disagree with this point - doing precalculation/data-warehousing or whatever you want to call it is an extreme measure. RDBMSes are extremely fast at combining relational data (if you have the right indexes), and having to maintain a separate cache in your data storage is troublesome. Personally I almost always have a server-side cache of the final HTML page, since despite being dynamic in nature, the frequency with which pages change is low compared to the number of times they are read. But creating a separate pre-joined table is something I would do extremely rarely.
3) Cache as much as you can at the application level. When you are done caching, kindly cache some more!
Note: Using an RDBMS to maintain user sessions or authentication beyond the initial name/password matching is 90's programming; you need to enroll for that chipuka certification thingy! There is a good reason why servers come with several GBs of RAM as standard nowadays.
Sometimes old school is the best school ;-) Storing sessions and authentication in an RDBMS enables you to easily migrate and scale live sessions across multiple servers!
5) When you cannot for the life of you cache any more, tell others to cache for you via HTTP headers like:
a. Cache-Control:
b. Expires:
c. Last-Modified:
Generally this should be the first thing to do rather than the last... and don't forget ETags; they are extremely useful for dynamic content.
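Mike's ETag point can be sketched in a few lines. The hashing choice here is arbitrary and purely illustrative (a real server would use a stronger validator), and both helper names are made up.

```java
public class Main {
    // Derive a weak ETag from the page bytes; any stable hash will do for a sketch
    public static String etagFor(byte[] body) {
        return "W/\"" + Integer.toHexString(java.util.Arrays.hashCode(body)) + "\"";
    }

    // Honour If-None-Match: an identical tag means the client's copy is current
    public static int statusFor(String etag, String ifNoneMatch) {
        return etag.equals(ifNoneMatch) ? 304 : 200;
    }

    public static void main(String[] args) {
        byte[] page = "<html>results</html>".getBytes();
        String tag = etagFor(page);
        System.out.println(statusFor(tag, null)); // 200: first request, send body + ETag
        System.out.println(statusFor(tag, tag));  // 304: cached copy is still good
    }
}
```

Unlike Last-Modified, this works even for dynamically generated pages with no meaningful modification time: regenerate, hash, compare.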
b. Web browsers do not give marks for good (or poorly implemented) "for", "while" and "repeat" loops, nor for fancy problem-solving code; instead they time out. If you have complex logic to implement, do it via stored procedures, as the data goes into the DB, not while it is being retrieved; spare your scripts for rendering, spare your server some CPU cycles, and spare us the environment.
Have never been a fan of stored procedures. The work that needs to be done is the same; whether it is done in the database or in your code makes little difference in most cases. Database vendors, on the other hand, love to teach you how to write them, since once you have built stored procedures for "their" DB, your application is tied closely to that DBMS. I would argue that pushing work to the DB makes your solution less scalable, since the DB is usually the bottleneck if/when you scale your application across multiple servers, and in that case you want the DB to do as little work as possible.
8) Size matters:
b. Eliminate whitespace in your generated HTML if you can. This is easy if you use templates for your pages that are filled in by the scripts. You could have indented sources on dev that you "compile" to remove whitespace and "check out" into prod.
Personally I do this - but in the bigger picture the gain is greatly offset by using compression; all those spaces become almost nothing.
9) Benchmark to know your limits. Apache has a nice tool called ab for this kind of thing. And when you reach your limits, bow out gracefully like the educated fellow you are, not like the habitual drunk at the locals.
ab is a poor man's tool for performance testing... it is good for getting a baseline stress test, but it will tell you very little about real-life performance. The best way to do it is to identify a set of use cases, then distribute those according to your expected load and feed them into a tool like JMeter... I can highly recommend "The Art of Application Performance Testing" by Ian Molyneaux - it is a real eye-opener of a book (or was for me when I read it). Then there are the more general architecture considerations: if you want to push the most out of one server, use a stateful model (J2EE / .NET) with lots of caching layers; if you want to scale easily across multiple servers, go for a stateless design with persistent caching.
regards Mike

Reasons why caching, or a reverse proxy in this case, may not be the solution:
1. The load is most likely generated by non-'cacheable' content, such as DB load and other dynamic content.
2. This load only lasts for 3-5 hours, twice per year - not long enough to build the cache.
The solution would probably lie in the following:
1. Scale the DB to whatever number of nodes would be optimal; with virtualization, this need not be an expensive exercise - MySQL clustering comes to mind, since they use MySQL.
2. Optimize the existing database, e.g. max connections etc.
3. Scale the web servers to whatever number of nodes would be optimal (load balancers, e.g. F5, Radware, Cisco, can do this easily).
But the big question is: is it worth the CAPEX and OPEX, considering it happens for a very short period, twice per year?
On 03/01/2012 11:01 PM, Michael Pedersen wrote:

Allow me some comments here (read: IMHO), as the thread is quite interesting. :-)
- 1) *All that Open Source and Free Software, na mambo bado (and still nothing)?!* I share the annoyance with @Moses and @Wanjiru; fortunately, these days when it comes to this sector I tend to just laugh it off. The essence of my age-old argument on software development is in the first lines of this point. How dare we disturb the holy balance of nature between being shown what to do and teaching what to do? :-)
- 2) Obviously the demands placed on the system are very high, and really only a few times a year, but this does not mean the CAPEX is not worth it. The issue of a standalone system is where the problem lies: why is the system not being used during the year to host e-education and schools management, provide access to teachers for reading materials, enable useful activities on the platform, etc.? The list can go on and on. BTW, by mid next year I'll have a simple schools management system running purely for code practice, so blame yourselves if it becomes popular purely by luck. :-)
- 3) DB or no DB? Is it just a matter of indexing convenience, since in the case of read-only data it matters not whether it is thread-safe? I'm still considering what to use on my current test project, and am happy with using split flat files. The argument here is this: if the server can feed 1000s of HTTP requests and responses from an HTML document, then why the DB dependencies? This is my amateur input here, so corrections welcome.
- 4) Tests. I find it quite interesting that no one has developed a testing platform for enterprise solutions. So how do people benchmark enterprise setups without even developing test tools to push the responses/requests/sessions? I am blank here.
- 5) Solution time: I think KNEC will need another approach, if they don't use one already. What they need, for all those searching online for their results, is a registration system.
So any student checking online on results day will need to register their student ID and SMS number/email account, and will have a wait time of around 1 hour or so. Behind the scenes, this ID will be matched to the result and put in an output queue. Once the queue is ready with the student's data, it can send an alert on what page the student can find the results at. As @Peter already mentioned, splitting the files will ease up on the load. Rgds.

That's another alternative; I see no reason not to use it. However, on a personal level, I am very wary of these international service providers nowadays: the memories of SOPA and PIPA still linger in my mind! I believe those two bills are a foretaste of things to come, and you never know when you have offended our American brothers enough to have your website pulled down without notice. In addition to the above, hosting it in the States really crushes our sense of national pride, doesn't it? It would be a statement from us acknowledging defeat, and that is not what we want, especially now that we are planning on exporting technology services to other countries. It kind of reminds you of those useless wagangas who keep telling people that they will make them rich and yet cannot make themselves rich! The closest we could come to using cloud services would be Safaricom. However, as you might already be aware, they seem to have kept the details to themselves and a few 'chosen people', i.e. those who are patient enough to send them emails or give them a call inquiring about their services! Why they would never display the details on their website still evades my imagination! On 2 March 2012 16:39, Mugambi Kimathi <skunkworksjahazi@gmail.com> wrote:
How about using Amazon AWS?
Any ideas? I know little of how it works, but to me it means they would only need to scale up once a year.
MK
_______________________________________________ Skunkworks mailing list Skunkworks@lists.my.co.ke ------------ List info, subscribe/unsubscribe http://lists.my.co.ke/cgi-bin/mailman/listinfo/skunkworks ------------
Skunkworks Rules http://my.co.ke/phpbb/viewtopic.php?f=24&t=94 ------------ Other services @ http://my.co.ke
-- Kind Regards, Moses Muya.

Amazon AWS is not that cheap. I have read two stories: one of how a "normal hosting" VPS was cheaper unless you factor in CPU, and a second of an iOS app that became popular overnight and got a bill too high to pay. They moved to their own servers.

Good people, it is quite clear that there are many people here who would be willing to design a functional system to replace the joke we have for the KNEC website. Eugene Lidede outlined a system quite clearly and Michael Perdersen went ahead to add some good points to it. Solomon also did a very nice job, I must say. To say that Skunks are able and willing to come up with a solution would be an understatement. However, we do not want to be classified as yet another group of talkers and 'non-doers', which brings me to the reason for this post. Does anyone among us know the officials at KNEC/Ministry of Education whom we could speak to and propose the solution to? My thinking is a rather simple one. Let us approach the Ministry officials and bring their attention to what we have purposed our energies to do. Once they agree, we will come back here and polish the solutions given, each one of us contributing according to his/her skill set. We will write a proposal, present it to the Ministry and then commence work immediately. The resulting system will be offered as a gift to the people of Kenya by Skunkworks. Beautiful, ain't it? Do you ladies and gentlemen think this is possible? Kindly let me know what you think. On 3 March 2012 23:27, Dennis Kioko <dmbuvi@gmail.com> wrote:
Amazon AWS is not that cheap. I have read two stories: one of how a "normal hosting" VPS was cheaper unless you factor in CPU, and a second of an iOS app that became popular overnight and got a bill too high to pay. They moved to their own servers.
-- Kind Regards, Moses Muya.

Following your thoughts: if no one has contacts at the Ministry of Education, you could approach Dr. Ndemo and he will point you in the right direction, or prepare an audience for you. BR///

Sure, sounds wonderful: an open-source exam results provisioning system. I'm so willing to contribute whatever I can to get this going. On Mon, Mar 5, 2012 at 4:14 PM, ndungu stephen <ndungustephen@gmail.com> wrote:
Following your thoughts;
If no one has contacts at the Ministry of Education, you could approach Dr. Ndemo and he will point you in the right direction, or prepare an audience for you.
BR///
-- Solomon Kariri, Software Developer, Cell: +254736 729 450 Skype: solomonkariri

On 5 March 2012 16:31, solomon kariri <solomonkariri@gmail.com> wrote:
Sure, sounds wonderful: an open-source exam results provisioning system. I'm so willing to contribute whatever I can to get this going.
On Mon, Mar 5, 2012 at 4:14 PM, ndungu stephen <ndungustephen@gmail.com> wrote:
Following your thoughts;
If no one has contacts at the Ministry of Education, you could approach Dr. Ndemo and he will point you in the right direction, or prepare an audience for you.
BR///
I can get someone who works at KNEC, although I don't know how much help he might give because of his position
-- Solomon Kariri,
Software Developer, Cell: +254736 729 450 Skype: solomonkariri

On 5 March 2012 18:28, Dennis Kioko <dmbuvi@gmail.com> wrote:
Solomon, you can help by referral.
I'll get in touch next week once I speak with him.

If these guys were hosting on Rackspace Cloud, that "once a year only" spike would not even be charged, in addition to it being scalable to that level... but then again, we ought to be talking about the Rackspaces in Kenya, don't we? On Mon, Mar 5, 2012 at 7:00 PM, Solomon Mbũrũ Kamau <solo.mburu@gmail.com> wrote:
On 5 March 2012 18:28, Dennis Kioko <dmbuvi@gmail.com> wrote:
Solomon, you can help by referral.
I'll get in touch next week once I speak with him.
-- *“The twentieth century has been characterized by three developments of great political importance: the growth of democracy, the growth of corporate power, and the growth of corporate propaganda as a means of protecting corporate power against democracy”*

@Solomon, Very good analysis. The problem is that many of the people offering solutions are simply programmers and not developers. This is why theory is very important: anyone who has not studied the theory and analysis of algorithms cannot understand the importance of achieving O(log n) in the worst case for such a system. Regards
________________________________
From: solomon kariri <solomonkariri@gmail.com> To: Skunkworks Mailing List <skunkworks@lists.my.co.ke> Sent: Thursday, March 1, 2012 10:24 AM Subject: Re: [Skunkworks] KNEC WEBSITE
OK, in my opinion: all this data is read-only, and it is so little it can fit into RAM, so I believe the limit should be bandwidth. Let's assume this implementation. First of all, they get rid of that PHP file and replace it with a simple index.html; that way it is just served, nothing is processed to generate HTML, plus it will be cached by the browser. They then add a JavaScript that simply does an AJAX query, receives a JSON response and generates the relevant HTML to display it. That moves quite a lot of processing to the client side. On the server, they can simply load all the records into an array and sort on index number. The index number can actually be treated as a long, so no complex comparison is needed. The sorting is done just once, when the server starts, since the data doesn't change; this takes O(n log n) time, which is like 5 seconds at the maximum. For any request, a binary search is done on the sorted data and a response is offered immediately. Since the data doesn't change, they can have a pool of threads servicing the requests and performing the binary searches concurrently. All searches take O(log n) time, which is negligible for the amount of data involved.
If they want to keep access logs as well, that's pretty simple: they create a simple in-memory queue, add an entry to it, and leave the writing to disk/database to a separate thread (or a number of threads); that way, slow disk access speeds don't affect response time. With that, the only limit left is the bandwidth. Actually, with a 5 Mbps up and down link they will be sorted, since all people are looking for is text, most of the time. So I just wonder: is this so hard to implement, or am I missing something? On Thu, Mar 1, 2012 at 9:51 AM, James Kagwe <kagwejg@gmail.com> wrote: Surprising that they don't want to fix a problem that occurs only once a year, yet the system is only relevant once a year. It is better not to offer a service than to offer a substandard service. They must build the required capacity or just kill the service altogether; otherwise it is just a waste of resources. They could probably learn from the electoral commission's tallying system.
On 3/1/2012 8:52 AM, Peter Karunyu wrote: A member of this list who knows someone in KNEC said here that they know what the problem is and how to fix it; they just don't see the logic in fixing a problem which occurs once a year.
So, in addition to lamenting here, why don't we think a lil bit outside the box?
We propose a solution which not only works for this annual occurrence, but also works for other problems they have which we don't know about. For example, how about coming up with a solution which they can use to disseminate ALL exam results online, not just KCSE? That should save them quite a bit in paper and printing costs.
But I think the real cause of this problem is lack of accountability: the CIRT team @ CCK focuses solely on security, the Ministry of Info. focuses on policies, KICTB focuses on implementing some of those policies and a few other things (but not including quality of software), and the Directorate of e-Government provides oversight on these systems. So if my opinions here are correct, someone @ Dr. Kate Getao's office is sleeping on the job.
On Thu, Mar 1, 2012 at 8:11 AM, Bernard Owuor <b_owuor@yahoo.com> wrote:
True. The fact that you can see "Failed connection to mysql DB" means that there's more than enough infrastructure.
(1) You get a response from the server: this means there is sufficient bandwidth, and the webserver that hosts the app has sufficient CPU cycles.
(2) They're using MySQL. Apart from potential limitations in the number of connections on Windows, you can easily do 500-1000 simultaneous connections. Only one connection is needed, though, so this should not be an issue.
Obviously, the architecture is poor and the app is not tested. The developer really skimped on their computer science classes, or didn't have any at all.
--- On Wed, 2/29/12, Rad! <conradakunga@gmail.com> wrote:
From: Rad! <conradakunga@gmail.com> Subject: Re: [Skunkworks] KNEC WEBSITE To: "Skunkworks Mailing List" <skunkworks@lists.my.co.ke> Date: Wednesday, February 29, 2012, 1:57 PM
Why are we assuming the problem is the infrastructure?
On Wednesday, February 29, 2012, Solomon Mbũrũ
Kamau wrote:
Can we do a harambee, like the one we did, the other day, for the purchase of a server(s) for KNEC and give it to them as a gift?
On 29 February 2012 17:38, ndungu stephen <ndungustephen@gmail.com> wrote:
But of course..
-- Regards, Peter Karunyu -------------------
-- Solomon Kariri, Software Developer, Cell: +254736 729 450 Skype: solomonkariri
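Solomon's scheme (sort once at startup, binary-search per request, push access-log entries onto an in-memory queue drained by a separate thread) can be sketched in a few lines of Java. This is a hedged illustration only; the class and method names are my own invention:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the scheme above: all results loaded into RAM, sorted once at
// startup, then served by concurrent read-only binary searches. Access-log
// entries go onto an in-memory queue so slow disk writes never block a
// request thread. Names here are illustrative only.
public class ResultsIndex {
    private final long[] indexNos;   // sorted candidate index numbers
    private final String[] grades;   // grades[i] belongs to indexNos[i]
    private final BlockingQueue<String> logQueue = new LinkedBlockingQueue<>();

    public ResultsIndex(Map<Long, String> records) {
        // Sort once at startup: O(n log n), and the data never changes after.
        indexNos = records.keySet().stream().mapToLong(Long::longValue).sorted().toArray();
        grades = new String[indexNos.length];
        for (int i = 0; i < indexNos.length; i++) grades[i] = records.get(indexNos[i]);

        // One background thread drains the log queue (here it just discards
        // entries; a real system would append them to disk or a database).
        Thread logger = new Thread(() -> {
            try { while (true) logQueue.take(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        logger.setDaemon(true);
        logger.start();
    }

    // O(log n), safe to call from many threads since the arrays never change.
    public String lookup(long indexNo) {
        logQueue.offer("lookup " + indexNo);
        int i = Arrays.binarySearch(indexNos, indexNo);
        return i >= 0 ? grades[i] : null;
    }
}
```

A whole year's KCSE dataset is small enough to fit comfortably in RAM, which is exactly the point of the argument above.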

@Solomon, yeah, your analysis makes sense. I don't know much about Java but I get the essence. Unfortunately, the folks @ KNEC use PHP a lot, and it is said that it's best to use the technology the core team is adept at. On Thu, Mar 1, 2012 at 1:33 PM, Shadrack Mwaniki <shadrack_mwaniki@yahoo.com> wrote:
-- Regards, Peter Karunyu -------------------

@Solomon You have stated the solution very clearly. As for the web server, I would use both Apache and Nginx. Apache will be fronted with Nginx, because Nginx has a VERY SMALL memory footprint and is quite scalable compared to Apache. By default it handles around 1,024 concurrent connections per worker, and with a reasonable amount of RAM we could push that to about 12,000 requests/second (720,000 requests/minute). Theoretically, using this calculation, it would take about 2 minutes to service the requests. Apache is good at processing dynamic files, but it also happens to be a resource hog. Nginx will run on port 80 and will forward any dynamic requests to Apache, which will be on port 8080.
The underlying OS determines a lot. Since we want to keep the costs as low as possible without sacrificing performance, we will be using either FreeBSD or Ubuntu. Besides, Windows is no good with Nginx, as it has a different way of handling event polling. The installation of either OS will be a bare installation comprising only a web server, database server, application server (PHP or Java) and SSH (I will explain later why we need this), to save on the resources we have. With Ubuntu, we will be able to provision new server instances using 'Juju' based on performance (FreeBSD geeks will let us know whether there is an equivalent on FreeBSD). The database setup is well explained; I won't go into that.
Two things that are rather unconventional will have to be done here:
1.0 Have parents/guardians/students send their email addresses to the Ministry. These will be sorted and saved ready for D-day. Immediately the results are released, everyone who gave an email address will have them safely in their inbox, hence no need to go to the website! We never know; this could cut about 100,000 requests from the web server if everything goes well.
2.0 The second option is rather unique and calls for true patriotism. There are many people on this list who could donate their bandwidth towards helping this cause. What if we all set up a sub-domain on our respective websites and mirrored the results there? It's simple: anyone who has the resources will create something like knec.example.com or knec.example.co.ke, give the necessary access details for the sub-domain, and on D-day, after the announcements have been made, the results will be rsynced to that sub-directory and will appear on the sub-domain! The Ministry will then list the mirror sites on their website for anyone having trouble loading the KNEC website. This will only be done for a maximum of 3 days after the announcements; then everyone can remove the sub-domain and wait until the following year! Voila! Ladies and gentlemen, we might have the solution at hand... By Kenya, For Kenyans! Corrections and improvements will be highly appreciated. On 1 March 2012 13:33, Shadrack Mwaniki <shadrack_mwaniki@yahoo.com> wrote:
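A minimal sketch of the Nginx-in-front-of-Apache arrangement described above (paths are placeholders and the numbers are illustrative, not tuned values):

```nginx
# Illustrative only: Nginx on port 80 serves static files itself and
# proxies dynamic requests to Apache on port 8080.
worker_processes  4;
events {
    worker_connections  1024;   # default per-worker connection limit
}
http {
    server {
        listen 80;
        root /var/www/knec;      # index.html and other static assets

        # Dynamic requests (e.g. the results query) go to Apache.
        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }
}
```

With the index.html-plus-AJAX design Solomon described, almost all traffic would hit the static path and never reach Apache at all.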
@Solomon, Very good analysis. The problem is many of the people offering solutions are simply programmers and not developers. This is why theory is very important. Anyone who has not studied the theory and analyiss of algorithms cannot understand the importance of achieving O(logn) in the worst case scenario for such a system
Regards
------------------------------ *From:* solomon kariri <solomonkariri@gmail.com>
*To:* Skunkworks Mailing List <skunkworks@lists.my.co.ke> *Sent:* Thursday, March 1, 2012 10:24 AM
*Subject:* Re: [Skunkworks] KNEC WEBSITE
Ok in my opinion, All this data is read only, Its so little it can fit into RAM I believe the limit should be bandwidth, ok lets assume this implementation, First of all they get rid of that php file and replace it with a simple index.html, that way it will just be served, nothing processed to generate html, plus it will be cached by the browser. They will then add a javascript that simply does an ajax query, receives a JSON response and generates the relevant html to display the JSON. That will move quite a lot of processing to the client side. On the server, they can simply load all the records on an array and sort on index number. That index number can actually be treated as a long, so no complex comparison. The sorting will be done just once, when the server starts since the data doesn't change. This will take O(nlogn) time. that will be like 5 seconds on the maximum. For any requests, a binary search is done on the sorted data and response is offered immediately. Since the data doesn't change, they can have a pool of threads servicing the requests and performing the binary searches concurrently. All searches will take O(logn) time, that's like negligible for the amount of data involved. If they want to keep access logs as well, well, that's pretty simple, they will create a simple in memory queue and add an entry to the queue and leave the process of writing that to disk/database to a separate thread or a number of threads, that way, the slow disk access speeds don't affect response time. With that, the only limit left will be the bandwidth. Actually with a 5mbps up and down link, they will be sorted, all people are looking for is text, most of the time. So I just wonder, is this so hard to implement or I'm I missing something?
On Thu, Mar 1, 2012 at 9:51 AM, James Kagwe <kagwejg@gmail.com> wrote:
Surprising they don't want to fix a problem that occurs only once a year yet the system is only relevant once a year. Its better not to offer a service than to offer a substandard service. They must build the required capacity or just kill the service altogether, otherwise its just a waste of resources. They probably an learn from electoral commission tallying system.
On 3/1/2012 8:52 AM, Peter Karunyu wrote:
A member of this list who knows someone in KNEC said here that they know what the problem is, they know how to fix it, they just don't see the logic in fixing a problem which occurs once a year.
So, in addition to lamenting here, why don't we think a lil bit outside the box;
We propose a solution which not only works for this annual occurrence, but also works for other problems they have which we don't know. For example, how about coming up with a solution which they can use to disseminate ALL exam results, not just KCSE, online? That should save then quite a bit in paper and printing costs.
But I think the real cause of this problem is lack of accountability; the CIRT team @ CCK focuses solely on security, the Ministry of Info. focuses on policies, KICTB focuses on implementing some of those policies and a few other things, but not including quality of software. The directorate of e-government provides oversight on these systems. So if my opinions here are correct, someone @ Dr. Kate Getao's office is sleeping on the job.
On Thu, Mar 1, 2012 at 8:11 AM, Bernard Owuor <b_owuor@yahoo.com> wrote:
True. Fact that you can see "Failed connection to mysql DB" means that there's more than enough infrastructure. (1) You get a response from the server - this means there is sufficient bandwidth, and the webserver that hosts the app has sufficient CPU cycles
(2) they're using mysql Apart from potential limitations in the the number of connections in windows, you can easily do 500 - 1000 simultaneous connections. Only one connection is needed, though, so this should not be an issue
Obviously, the architecture is poor and the app is not tested. The developer really skimped on their computer science classes, or didn't have any at all.
--- On *Wed, 2/29/12, Rad! <conradakunga@gmail.com>* wrote:
From: Rad! <conradakunga@gmail.com> Subject: Re: [Skunkworks] KNEC WEBSITE To: "Skunkworks Mailing List" <skunkworks@lists.my.co.ke> Date: Wednesday, February 29, 2012, 1:57 PM
Why are we assuming the problem is the infrastructure?
On Wednesday, February 29, 2012, Solomon Mbũrũ Kamau wrote:
Can we do a harambee, like the one we did, the other day, for the purchase of a server(s) for KNEC and give it to them as a gift?
On 29 February 2012 17:38, ndungu stephen <ndungustephen@gmail.com> wrote:
But of course.. _______________________________________________ Skunkworks mailing list Skunkworks@lists.my.co.ke ------------ List info, subscribe/unsubscribe http://lists.my.co.ke/cgi-bin/mailman/listinfo/skunkworks ------------
Skunkworks Rules http://my.co.ke/phpbb/viewtopic.php?f=24&t=94 ------------ Other services @ http://my.co.ke
-- Regards, Peter Karunyu -------------------
-- Solomon Kariri,
Software Developer, Cell: +254736 729 450 Skype: solomonkariri
-- Kind Regards, Moses Muya.

I'm very happy that skunks are actually thinking of smart solutions to this "problem". But what really is the problem? "Database Error: Unable to connect to the database: Could not connect to MySQL". Looks like a single tweak to MySQL's config file - my.ini - would fix this: increase MySQL's max_connections variable. This might require more GBs of RAM, but that's all.

However, the situation may stem from each query taking too long. This could be the result of too many joins, each query returning too much data, or just a very slow server. With less than 5KB of data per student, the entire dataset fits snugly inside 2.5GB of RAM, and MySQL is clever enough to cache it for you if you set it up correctly. Otherwise, an array in a servlet is lovely.

If I were the sysadmin, for this 1-2 day phenomenon I'd replace KNEC's $5,000, 1GB, 1.66GHz dual-core servers with my $1,000, 4GB, 2.4GHz quad-core laptop. Heck, it's a few-hours phenomenon, so I'd borrow my friends' laptops, load the same data into their MySQL servers and set up load balancing.

Bernard

--- On Thu, 3/1/12, Moses Muya <mouzmuyer@gmail.com> wrote:

From: Moses Muya <mouzmuyer@gmail.com> Subject: Re: [Skunkworks] KNEC WEBSITE To: "Shadrack Mwaniki" <shadrack_mwaniki@yahoo.com>, "Skunkworks Mailing List" <skunkworks@lists.my.co.ke> Date: Thursday, March 1, 2012, 5:27 AM

@Solomon You have stated the solution very clearly. As for the webserver, I would use both Apache and Nginx. Apache will be fronted with Nginx, because Nginx has a VERY SMALL memory footprint and is quite scalable compared to Apache. By default it handles around 1024 requests per second, and with a reasonable amount of RAM we could push that to about 12,000 requests/second (720,000 requests/minute). Theoretically, using this calculation, it would take about 2 minutes to service the requests.
Apache is good at processing dynamic files, but it also happens to be a resource hog. Nginx will run on port 80 and forward any dynamic requests to Apache, which will be on port 8080.

The underlying O.S. determines a lot. Since we want to keep costs as low as possible without sacrificing performance, we will use either FreeBSD or Ubuntu. Besides, Windows is no good with Nginx, as it has a different way of handling event polling. The installation of either O.S. will be a bare installation comprising only a web server, database server, application server (PHP or Java) and SSH (I will explain later why we need this), to save on the resources we have. With Ubuntu, we will be able to provision new server instances using 'Juju' based on performance (FreeBSD geeks will let us know whether there is an equivalent on FreeBSD).

The database set-up is well explained; I won't go into that. Two things that are rather unconventional will have to be done here:

1.0 Have parents/guardians/students send their email addresses to the ministry. These will be sorted and saved, ready for D-day. The moment the results are released, everyone who gave their email address will have them safely in their inbox, hence no need to go to the website! You never know, this could cut about 100,000 requests from the web server if everything goes well.
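The Nginx-in-front-of-Apache arrangement described above could be sketched as an nginx.conf fragment like this (the server name, document root and ports are placeholders, not KNEC's real setup):

```nginx
# Nginx on port 80: serves static files itself, forwards PHP to Apache on 8080
server {
    listen 80;
    server_name knec.example.com;
    root /var/www/knec;

    # Static content: served directly by Nginx, cacheable by clients
    location / {
        index index.html;
        expires 1h;
    }

    # Dynamic content: proxied to the resource-hungry Apache backend
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```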
2.0 The second option is rather unique and calls for true patriotism. There are many people on this list who can donate their bandwidth towards helping this cause. What if we all set up a sub-domain on our respective websites and mirrored the results there? It's simple: anyone who has the resources will create something like knec.example.com or knec.example.co.ke, give out the necessary access details for that sub-domain, and on D-day, after the announcements have been made, the results will be rsynced to that sub-directory and will appear on the sub-domain! The ministry will then list the mirror sites on its website for anyone having trouble loading the KNEC website. This will only be done for a maximum of 3 days after the announcements; then everyone can remove the sub-domain and wait until the following year!

Voilà! Ladies and gentlemen, we might have the solution at hand... By Kenya, For Kenyans! Corrections and improvements will be highly appreciated.

On 1 March 2012 13:33, Shadrack Mwaniki <shadrack_mwaniki@yahoo.com> wrote:

@Solomon, Very good analysis. The problem is that many of the people offering solutions are simply programmers and not developers. This is why theory is very important. Anyone who has not studied the theory and analysis of algorithms cannot understand the importance of achieving O(log n) in the worst case for such a system. Regards

From: solomon kariri <solomonkariri@gmail.com> To: Skunkworks Mailing List <skunkworks@lists.my.co.ke> Sent: Thursday, March 1, 2012 10:24 AM Subject: Re: [Skunkworks] KNEC WEBSITE

OK, in my opinion: all this data is read-only, and it's so little it can fit into RAM. I believe the limit should be bandwidth. So let's assume this implementation. First of all, they get rid of that PHP file and replace it with a simple index.html; that way it will just be served, nothing processed to generate HTML, plus it will be cached by the browser.
They will then add a javascript that simply does an ajax query, receives a JSON response and generates the relevant HTML to display it. That will move quite a lot of processing to the client side. On the server, they can simply load all the records into an array and sort on index number. The index number can actually be treated as a long, so there is no complex comparison. The sorting is done just once, when the server starts, since the data doesn't change. This will take O(n log n) time - that's like 5 seconds at the maximum. For any request, a binary search is done on the sorted data and a response is offered immediately. Since the data doesn't change, they can have a pool of threads servicing the requests and performing the binary searches concurrently. All searches take O(log n) time; that's negligible for the amount of data involved.

If they want to keep access logs as well, that's pretty simple: they create a simple in-memory queue, add an entry to it per request, and leave the job of writing it to disk/database to a separate thread or a number of threads. That way, slow disk access speeds don't affect response time. With that, the only limit left will be bandwidth. Actually, with a 5Mbps up and down link they will be sorted; all people are looking for is text, most of the time. So I just wonder: is this so hard to implement, or am I missing something?

On Thu, Mar 1, 2012 at 9:51 AM, James Kagwe <kagwejg@gmail.com> wrote:

Surprising that they don't want to fix a problem that occurs only once a year, yet the system is only relevant once a year. It's better not to offer a service than to offer a substandard one. They must build the required capacity or just kill the service altogether; otherwise it's just a waste of resources. They could probably learn from the electoral commission's tallying system.
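The sort-once-then-binary-search design above can be sketched in a few lines of Python; the index numbers and grades below are invented, and a real server would load them from KNEC's data at startup:

```python
import bisect

# Invented result set: (index_number, grade). In the design above this is
# loaded into RAM once when the server starts and never changes afterwards.
records = [
    (20401001001, "B+"),
    (20401001002, "A-"),
    (10203004005, "C"),
    (30902011007, "A"),
]

# Sort once on index number: O(n log n), paid only at startup.
records.sort(key=lambda r: r[0])
index_keys = [r[0] for r in records]  # parallel key list for bisect

def lookup(index_number):
    """O(log n) binary search. Safe to call from a pool of threads,
    since the data is read-only after startup."""
    i = bisect.bisect_left(index_keys, index_number)
    if i < len(index_keys) and index_keys[i] == index_number:
        return records[i][1]
    return None

print(lookup(20401001002))  # prints A-
print(lookup(99999999999))  # prints None (no such candidate)
```

Because every request is a read against immutable data, no locking is needed; the only shared mutable state would be the access-log queue, which a background writer thread drains.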
On 3/1/2012 8:52 AM, Peter Karunyu wrote:

A member of this list who knows someone in KNEC said here that they know what the problem is and how to fix it; they just don't see the logic in fixing a problem which occurs once a year. So, in addition to lamenting here, why don't we think a little bit outside the box and propose a solution which not only works for this annual occurrence, but also works for other problems they have which we don't know about? For example, how about coming up with a solution they can use to disseminate ALL exam results online, not just KCSE? That should save them quite a bit in paper and printing costs.

But I think the real cause of this problem is lack of accountability: the CIRT team @ CCK focuses solely on security, the Ministry of Info. focuses on policies, and KICTB focuses on implementing some of those policies and a few other things, but not the quality of software. The directorate of e-government provides oversight on these systems. So if my opinions here are correct, someone @ Dr. Kate Getao's office is sleeping on the job.

Actually, it is simpler to set up Nginx by itself and drop Apache. It works. I have seen Nginx handle ten times a load that would bring down Apache.
participants (19)
-
aki
-
Alex Kamiru
-
Bernard Owuor
-
Collins Areba
-
Dennis Kioko
-
Eugene Lidede (Synergy)
-
James Kagwe
-
Michael Pedersen
-
Moses Muya
-
Mugambi Kimathi
-
ndungu stephen
-
Peter Karunyu
-
Peter Maingi
-
Rad!
-
Shadrack Mwaniki
-
solomon kariri
-
Solomon Mbũrũ Kamau
-
Wanjiru Waweru
-
wool row