Very good analysis. The problem is many of the people offering solutions are simply programmers and not developers.
This is why theory is very important. Anyone who has not studied the theory and analysis of algorithms cannot understand the
importance of achieving O(log n) in the worst case for such a system.
Regards
From: solomon kariri <solomonkariri@gmail.com> To: Skunkworks Mailing List <skunkworks@lists.my.co.ke> Sent: Thursday, March 1, 2012 10:24 AM Subject: Re: [Skunkworks] KNEC WEBSITE
OK, in my opinion:
All this data is read-only.
It's so little it can fit into RAM.
I believe the limit should be bandwidth. OK, let's assume this implementation:
First of all, they get rid of that PHP file and replace it with a simple index.html. That way it will just be served, with nothing processed to generate HTML, plus it will be cached by the browser.
They will then add a JavaScript snippet that simply does an AJAX query, receives a JSON response and generates the relevant HTML to display it. That will move quite a lot of processing to the client side.
On the server, they can simply load all the records into an array and sort by index number. That index number can actually be treated as a long, so no complex comparison is needed. The sorting is done just once, when the server starts, since the data doesn't change. This will take O(n log n) time, maybe 5 seconds at the maximum. For any request, a binary search is done on the sorted data and the response is returned immediately. Since the data doesn't change, they can have a pool of threads servicing requests and performing binary searches concurrently. Each search takes O(log n) time, which is negligible for the amount of data involved.
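A minimal Java sketch of the sort-once-then-binary-search idea above. The index numbers and grades here are made-up placeholders, and the class and method names are my own invention, not anything from KNEC's system:

```java
import java.util.Arrays;

public class ResultsIndex {
    // Hypothetical data: candidate index numbers (held as longs) and their results.
    static final long[] indexNumbers = {10000003L, 10000001L, 10000002L};
    static final String[] results = {"B+", "A", "C"};

    static long[] sortedKeys;
    static String[] sortedResults;

    // Done once at server startup: O(n log n).
    static void buildIndex() {
        Integer[] order = new Integer[indexNumbers.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Long.compare(indexNumbers[a], indexNumbers[b]));
        sortedKeys = new long[order.length];
        sortedResults = new String[order.length];
        for (int i = 0; i < order.length; i++) {
            sortedKeys[i] = indexNumbers[order[i]];
            sortedResults[i] = results[order[i]];
        }
    }

    // Each request does an O(log n) binary search. Safe for a pool of threads
    // to call concurrently, because the arrays are never mutated after startup.
    static String lookup(long indexNumber) {
        int pos = Arrays.binarySearch(sortedKeys, indexNumber);
        return pos >= 0 ? sortedResults[pos] : null;
    }

    public static void main(String[] args) {
        buildIndex();
        System.out.println(lookup(10000002L)); // prints C
        System.out.println(lookup(99999999L)); // prints null (no such candidate)
    }
}
```

Because all reads go to immutable arrays, no locking is needed at all on the request path.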
If they want to keep access logs as well, that's pretty simple: they create a simple in-memory queue, add an entry to it, and leave the job of writing it to disk/database to a separate thread (or a number of threads). That way, slow disk access speeds don't affect response time. With that, the only limit left will be bandwidth. Actually, with a 5 Mbps up and down link they will be sorted; all people are looking for is text, most of the time.
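The logging idea above can be sketched in Java with a `BlockingQueue`. The class name and log format are hypothetical; the actual disk/database write is stood in for by a print, since the point is only that the request thread never touches the disk:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AccessLog {
    // In-memory queue; request threads only pay the cost of an enqueue.
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Called on the request path: O(1), no disk I/O at all.
    static void record(String entry) {
        queue.offer(entry);
    }

    // A dedicated writer thread drains the queue, so slow disk access
    // never delays a response to the client.
    static Thread startWriter() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    String entry = queue.take();
                    System.out.println(entry); // stand-in for the disk/database write
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```

If the writer ever falls behind, the queue just grows in RAM; a bounded `LinkedBlockingQueue` would instead drop or block, which is a tuning choice.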
So I just wonder, is this so hard to implement, or am I missing something?
On Thu, Mar 1, 2012 at 9:51 AM, James Kagwe <kagwejg@gmail.com> wrote:
Surprising they don't want to fix a problem that occurs only once a
year, yet the system is only relevant once a year. It's better not to
offer a service than to offer a substandard service. They must build
the required capacity or just kill the service altogether; otherwise
it's just a waste of resources. They could probably learn from the
electoral commission's tallying system.
On 3/1/2012 8:52 AM, Peter Karunyu wrote:
A member of this list who knows someone in KNEC said here that
they know what the problem is, they know how to fix it, they just
don't see the logic in fixing a problem which occurs once a year.
So, in addition to lamenting here, why don't we think a little bit
outside the box:
We propose a solution which not only works for this annual
occurrence, but also works for other problems they have which we
don't know about. For example, how about coming up with a solution
which they can use to disseminate ALL exam results, not just KCSE,
online? That should save them quite a bit in paper and printing
costs.
But I think the real cause of this problem is lack of
accountability; the CIRT team @ CCK focuses solely on security,
the Ministry of Info. focuses on policies, KICTB focuses on
implementing some of those policies and a few other things, but
not including quality of software. The directorate of e-government
provides oversight on these systems. So if my opinions here are
correct, someone @ Dr. Kate Getao's office is sleeping on the job.
On Thu, Mar 1, 2012 at 8:11 AM, Bernard Owuor <b_owuor@yahoo.com>
wrote:
True. The fact that you can see "Failed
connection to mysql DB" means that there's more than
enough infrastructure.
(1) You get a response from the server
- this means there is sufficient bandwidth, and the
webserver that hosts the app has sufficient CPU cycles
(2) they're using MySQL
Apart from potential limitations in the number of
connections on Windows, you can easily do 500 - 1000
simultaneous connections. Only one connection is
needed, though, so this should not be an issue.
Obviously, the architecture is poor and the app is not
tested. The developer really skimped on their computer
science classes, or didn't have any at all.