It is hard to keep up with buzzwords in this trade. I just ran into the term IOPS, which stands for input/output operations per second. IBM set an IOPS record this month with more than 1 million input/output operations per second at a sustained response time of one millisecond. But here is the real gimmick: it was done on a 4 Terabyte array. This was not a “platter farm”,
but a solid state disk (SSD) array. Think of it as a memory stick on steroids. This hardware is physically smaller and five times faster than the traditional moving-platter drives we have been using for the last forty-plus years, and it consumes less power.
Solid State Disks have been getting cheaper, and memory sticks are replacing coffee mugs and tee shirts as speaker gratuities at trade shows. But I think this is more than Moore’s Law at work. At some point, you cannot spin a disk any faster or push magnetic flux past an upper limit, and only so many physical read/write heads will fit in a given space.
Since databases are all about IOPS, our part of the trade has spent a lot of time worrying about costing algorithms for conventional disks: caching, garbage collection, read-ahead, data compression, RAID distribution, and so forth. In a few years these algorithms will be as useful as knowing how to use a slide rule. We spent a lot of time tuning moving-platter disk drives and got pretty good at it. But I don’t have a good idea of what tuning I will have to figure out on this new hardware. I am not sure that anyone knows yet.
Let’s throw another consideration into the mix. Intel and the other chip makers are telling us that there will be no more single-core chips within a few years. When I have enough cheap processors, programming is not the same as it was.
While I could not put a physical read/write head over every track on a hard disk, there is a pretty good chance that I can assign a processor to contiguous blocks of flash memory. This is not
a new database design; WX2 and other VLDB products have had some parallelization either in proprietary hardware (database appliances) or with proprietary software that uses off-the-shelf blade
servers.
I want to stress the word “proprietary” in the last paragraph. If you look at the history of parallelism in the more popular programming languages, the record is not good. FORTRAN and C had a “fork and join” extension in some products, but that was never part of the Standards. Algol-60 had proposals for cobegin-coend blocks, but they were never made part of the language. You could tell that these extensions were stuck on as an afterthought to get at some operating system feature. Why would you spend a lot of time on language features for which there was no hardware? And why would you think of them, if your whole mindset was locked into the von Neumann model of a computer?
SQL programmers have the advantage of working with a language that is naturally parallel because of its set-oriented nature. If you partition a set, there are certain classes of functions that can be applied to each partition and then unioned back into a result set that gives us the function applied to the original set. In English, you can split up the job, have everyone drop their part in the “outbox”, and put it back together at the end of the day. If you want to read about the math, look up Google’s “MapReduce” papers; this is the basis of their search engine.
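As a minimal sketch of that idea in Standard SQL (the Sales table and its columns are made up for illustration, not taken from any real schema), each partition can compute its own partial total and the partials roll back up into the grand total:

    -- A hypothetical Sales table; each sales_region is a partition
    -- that a separate processor could scan independently.
    CREATE TABLE Sales
    (ticket_nbr   INTEGER NOT NULL PRIMARY KEY,
     sales_region CHAR(2) NOT NULL,
     sale_amt     DECIMAL(12,2) NOT NULL);

    -- Step 1: each partition drops its partial total in the "outbox"...
    WITH Partial_Totals (sales_region, region_total)
    AS
    (SELECT sales_region, SUM(sale_amt)
       FROM Sales
      GROUP BY sales_region)
    -- Step 2: ...and the partials are put back together at the end of the day.
    SELECT SUM(region_total) AS grand_total
      FROM Partial_Totals;

Because SUM is decomposable, the sum of the partial sums is exactly the total a single-threaded scan would have produced; COUNT, MIN and MAX behave the same way, while something like a median does not split up so neatly.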
Erlang and a few other new functional programming languages have had parallelism in them from the beginning. This is a good thing. But the bad news is that we don’t have a lot of programmers for these languages. The few programmers we do have tend to be geeks and not business majors. While I still wear a pocket protector, there is more demand for payroll programs and simple reports than for exotic algorithms.
The traditional sequential, procedural-language programmer is doing just fine today. The machinery he works with still fits his programming model. But that equipment is going to disappear, and he is going to be a 1950s radio repairman in a world of iPods.
As an example of the mindset problems, I had an exchange on an SQL newsgroup last month with another old-timer. He was still using a proprietary auto-increment column as the primary key in his tables. The feature counts the physical record insertion attempts (not successes) made to a table that has this property added to it. It is a “code museum” piece left over from the old days of contiguous storage, which in turn mimicked magnetic tape files and their record numbers. Those “magical numbers” are only good inside one piece of hardware, much like the pointer chains and record numbers they mimic. For now, ignore the fact that such a thing is also non-relational and has no validation and no verification.
He had never considered parallel, asynchronous insertions into the schema from hundreds or thousands of processors. And did we mention that the tables are in Petabytes, not Terabytes? Queuing up that kind of volume in a single thread is not going to work. Traditional tree indexes will not work. In fact, a whole lot of things will not work.
By way of an analogy, he was identifying his car by the parking space number in a garage. What he really needed was the VIN, to get validation and verification and to have an established industry standard for data exchange with insurance companies and so forth. What he is going to need very shortly is a GPS to follow his car while it is moving and an on-board computer to handle the maintenance.
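As a minimal sketch of the difference (the Motorpool table is my own invention, and the SIMILAR TO predicate, while Standard SQL, is not supported in every product), a VIN-style key carries its validation in the DDL and means the same thing no matter which processor or which piece of hardware inserted the row:

    CREATE TABLE Motorpool
    (vin           CHAR(17) NOT NULL PRIMARY KEY
                   CONSTRAINT valid_vin_format
                   CHECK (vin SIMILAR TO '[0-9A-HJ-NPR-Z]{17}'), -- VINs never use I, O or Q
     acquired_date DATE NOT NULL,
     parking_space INTEGER);  -- a locator, not an identifier

A real schema would also verify the VIN check digit, but even this much gives the key validation and an industry-standard meaning outside the box it lives in, and a thousand processors can insert rows asynchronously without queuing up behind a single counter.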