Friday, October 2, 2009

Importance of hackability

This is something I wrote down on my BlackBerry during one of those sleepless nights.


As I huddle in the darkness of my room with my BlackBerry in my hand, listening to an mp3 rip of the Deus Ex soundtrack and reading Cory Doctorow's Makers serialized on the web, I am inevitably reminded of a question that has been plaguing me for a long time.

I read about the greatness of the old computer systems all the time, being something of a retrocomputing enthusiast. I read about all the wonderful stuff people did with their first Apple computers and ZX Spectrums, making and running crazy things in 128 KB of memory. I also read about the demoscene, where people squeeze old retro hardware to its last reserves of computing power to create fascinating works of pop art.

And then I look at my handset. I remember my old Nokia running Symbian, an operating system designed from the ground up for mobile hardware. I scrounge through the apps on my BlackBerry. Any modern handset I can remember using, or seeing other people use, is vastly superior to most of the retro hardware that is remembered so fondly. Even my older-model BlackBerry can kick the pants off the old Apple computers in terms of hardware spec, and it has far more sensors to boot, allowing it to communicate with the world through its eyes (camera) and its location awareness (GPS). I'm not even going to talk about the always-on data connection, because it's a given on any working handset.

Yet despite the reserves of computing power and the amazing array of sensors available on these little buggers, they just don't seem to inspire the same level of awe and creativity that the first generation of personal computers did for their users. Just what happened? What is the difference between modern handsets and first-generation PCs, other than how superior many of the modern handsets are in terms of spec?

The answer, I think, may have to do with the hackability of the handsets compared to the first-generation PCs. First-generation PCs were intended as computers. They had moderately sized screens (though the resolution was mostly worse than that of even the poorest smartphone on the market today) and a full complement of input devices you could use for extended periods (just a full-sized keyboard, really, though it does make a difference). Yet these are still superficial hardware differences that can be made up for quite easily. Most high-end Nokia phones support connecting to TV screens and whatnot, and most Bluetooth-equipped smartphones can interface with a Bluetooth keyboard. Can, but aren't allowed to.

The biggest difference, perhaps the only difference, between the old first-generation PCs and current handsets seems to be the software. The PCs were intended as computers, meaning you were provided with the tools to develop new content for the platform, usually in the form of a BASIC implementation for the given system. It was possible to code in assembly and such if you were good enough. The best memories of the old systems and their wonders are almost always linked with that entry-level development on the system itself.

You can't find that on handsets. Even Google Android doesn't yet provide a suitable on-device platform that can be used to manipulate the machine fully. There is zero chance that the user of a BlackBerry handset would be able to run code on the handset itself, and even when tethered to a full-scale PC the road is usually long and confusing.

Granted, modern smartphone hardware is complicated, which supposedly necessitates a complex development environment (really?). Yet what if the mobile OS itself gave users just a bit more control over their own hardware? What if we could bring a modern BASIC equivalent like Python onto the mobile OS, capable of interfacing with and controlling the hardware?
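To make that concrete, here's the kind of five-minute program I'm imagining. This is a pure sketch under an assumed API: the MockPhone class and its method names are my own invention, standing in for a hypothetical on-device scripting interface, not anything a real handset ships with today.

```python
# Thought experiment: what a five-minute script on a hackable handset could
# look like. MockPhone is a stand-in for an imagined on-device API; none of
# these names belong to a real handset SDK.

class MockPhone:
    """Pretend handset: camera and GPS, scriptable from Python."""
    def take_photo(self):
        return b"...jpeg bytes..."       # pretend camera output
    def gps_position(self):
        return (40.7128, -74.0060)       # pretend fix: somewhere in NYC

phone = MockPhone()

photo = phone.take_photo()               # one line to reach the camera
lat, lon = phone.gps_position()          # one line to reach the GPS

# Geotag the shot; on a real hackable handset the obvious next line would
# push the file out over the always-on data connection.
with open("photo_%f_%f.jpg" % (lat, lon), "wb") as f:
    f.write(photo)
print("saved a geotagged photo taken at %f, %f" % (lat, lon))
```

That's the whole point: a dozen lines, written on the device itself, talking to the device's own eyes and sense of place.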

Would it lead to another wave of developers and tinkerers worldwide, creating things that were completely unexpected?

Thursday, October 1, 2009

Alan Kay applied to synthetic biology... and other late-night notes.

I always find it very hard to blog. Even when I have the time to write something (not necessarily sitting in front of a laptop, mind you; I'm rather known for writing stuff that needs word-processor access and sending it in straight from my handset), I always feel that whatever I'm writing, or trying to write, at the moment just doesn't feel exciting or important enough. Which is why I keep multiple blogs around the net, each serving as a rant template for the others. Something begins as a rant template at place A, only to be edited into another form for place B, then place C, and so on and so forth, before the same yet radically altered post ends up as a follow-up at its place of origin.

I know I should be writing about some other things as well, like how DIYbio NYC might be amazingly close to getting a real lab space, or how I'm prepping to stop by this year's iGEM jamboree. Oh, or the pictures from this year's major DIYbio NYC event, where we set up a stall at the greenmarket and extracted DNA from natural produce with common household materials (with the city people, of course). Each of those things would probably make for some lengthy and interesting reading, and the list goes on (my life's actually kind of exciting right now). Yet whenever I find the time to write something down, nada. Nothing. My mind just shuts down and nothing I can commit to paper or the keyboard seems good enough.


Tonight, though, aided by my weird bout of insomnia, I'll just write down something I've been meaning to say for a long time. I'm not even going to spellcheck this thing (God save my soul).

I've been looking into the history of computing and computer languages recently. I've always had some level of interest in computers, not just in the spiffy brand-new muscle machines but in what most people would refer to as 'retrocomputing' (I once ended up practicing some Ada because of that. Ugh), which is a story for another time. It's not that I think the old ways of computing were better than the new. It's just that it's much easier to trace the evolution of the concept of computing when you see beyond the immediate commercial products.

Synthetic biology is effectively the pursuit of engineering biological organisms. Biological organisms are built upon a singular information storage and processing system that has quite a few parallels to computerized systems. I've been wondering whether it would be possible to predict the future development of synthetic biology by looking at how computer programming languages evolved (because both deal with information processing systems applied to a physical medium). Maybe it might even be possible to predict some of the pitfalls inherent in developing any kind of complex programmable information processing system before synthetic biology runs into them. Maybe it would be possible to give synthetic biology, within mere years, a conceptual framework that would otherwise have taken decades to mature naturally.

While I was rummaging through the texts, both in real life and on the web (with many of the promising links on the web leading to dead ends and 404s), I ran into a programming language and environment I was only superficially familiar with before: Smalltalk and Squeak, respectively, both brainchildren of the computing pioneer Alan Kay.

Here's an excerpt from a biography of Alan Kay that I found on the net (I can't find the website right now; I swear I'll edit it in later, when my brain's actually working!):

“Alan Kay postulated that the ideal computer would function like a living organism; each “cell” would behave in accord with others to accomplish an end goal but would also be able to function autonomously. Cells could also regroup themselves in order to attack another problem or handle another function.”

This is the basic philosophy behind Smalltalk/Squeak and the object-oriented programming paradigm. It is no coincidence that Alan Kay's vision of the ideal computer language and computing environment would take to a biological allegory, since he came from a molecular biology background.
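Since I can't drop a Squeak image into a blog post, here's a toy rendering of that philosophy in Python instead. The Cell classes and message names are my own invention, purely to make the analogy concrete: each "cell" is an autonomous object, and the only way to influence it is to send it a message.

```python
# Toy sketch of Kay's "computer as organism" idea (my own illustration,
# not Kay's code): autonomous cells that respond only to messages.

class Cell:
    def receive(self, message):
        # Each cell decides for itself how, and whether, to respond.
        handler = getattr(self, "on_" + message, None)
        return handler() if handler else None  # unknown messages are ignored

class SensorCell(Cell):
    def on_sense(self):
        return "signal detected"

class ReporterCell(Cell):
    def on_report(self):
        return "fluorescing"

# Cells can be regrouped to attack another problem: the "organism" is just
# whichever population we happen to be messaging.
population = [SensorCell(), ReporterCell()]
for cell in population:
    print(cell.receive("sense"), cell.receive("report"))
```

In actual Smalltalk the idea goes all the way down: even control flow is a message sent to a Boolean object.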

While reading through the history of different computing paradigms to figure out how they might be applied to understanding and using synthetic biology, I found something else awesome, and perhaps a little heartwarming. Throughout his life as a computing pioneer, Alan Kay held onto the belief that the ideal computing platform isn't the one that crunches numbers fastest, but the one that can be integrated into the user's education through ease of manipulation and control. The ideal computing platform should be hackable because it makes logical sense for it to be.

Can we say the same of synthetic biology? Perhaps not. The direct comparison of a complex biological system to computerized circuits and cathode-ray-tube projections can only take us so far. Yet I can't shake the nagging feeling that synthetic biology might be looking at some very unique opportunities for change precisely because it is different from regular electronic systems, with the documents from the early days of computers and programming already here for our perusal.

A good, elegant system that allows programmable extension must at the same time be easy, or at least logical, to learn. And some systems both run better and are easier to learn than others. This might become something of an issue in how synthetic biology parts/devices/systems are put together in the future, as the capacity of synthetic biologists to handle complex systems increases.

I think I might be able to pursue this idea further. As it stands, this is nothing more than an interesting conceptual parallel without substantial scientific reasoning behind it.

Which is why I should get myself to learn Smalltalk/Squeak sometime in the future. Maybe I should knock on the doors of the hackerspaces in the city and see if anyone's willing to mentor me.

Now, it's about time for me to get some sleep.