Tag Archives: comp sci

Problem set to practice your new language katas

If you are trying to learn a new language, or to improve your grasp of one, there are different approaches, but applying the language to something practical is always a solid one. It forces you to take a concept and construct it in the language of choice. And with a good problem set there will be levels of sophistication to the construction, letting you pull out the facets and nuances of each language and its programming style (functional, imperative, etc.), and even, across multiple languages, make informed comparisons between them. Need a problem set? Enter [Project Euler](http://projecteuler.net/).
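As a taste of the kind of kata on offer, here is Project Euler's first problem (sum the multiples of 3 or 5 below 1000) solved twice in Python, once imperatively and once functionally, to show how one problem can exercise different paradigms in the same language:

```python
# Project Euler Problem 1: sum of all the multiples of 3 or 5 below 1000.

def euler1_imperative(limit=1000):
    # Explicit loop and accumulator: the imperative style.
    total = 0
    for n in range(limit):
        if n % 3 == 0 or n % 5 == 0:
            total += n
    return total

def euler1_functional(limit=1000):
    # Same computation as a single filtered-sum expression: the functional style.
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(euler1_imperative())  # 233168
print(euler1_functional())  # 233168
```

Comparing the two side by side is exactly the kind of nuance-hunting a good problem set enables.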

Google graph engine cloned in Goldenorb

Google releases the odd bit of research on what is happening in the mothership. In some cases it publishes research papers describing one of the proprietary platforms driving its back end, which to a certain degree allows outside developers to mimic those platforms with open source projects. Google's papers on its GFS distributed file system and its MapReduce distributed number-crunching platform, for example, gave rise to the open source [Hadoop](http://hadoop.apache.org/), and a paper on its BigTable distributed database sparked the open source [HBase project](http://hbase.apache.org/). Google published news of [Pregel](http://googleresearch.blogspot.com/2009/06/large-scale-graph-computing-at-google.html), and now another open source project, GoldenOrb, has come off the forge.
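To give a flavour of the model GoldenOrb reimplements: Pregel computations are vertex-centric and proceed in supersteps, with vertices exchanging messages and voting to halt. Here is a toy sketch of that scheme (the function and graph representation are illustrative, not GoldenOrb's actual API), propagating the maximum value through a graph:

```python
def pregel_max(graph, values):
    """Toy Pregel-style superstep loop.

    graph:  {vertex: [out-neighbours]}
    values: {vertex: initial integer value}
    Propagates the maximum value along edges until no messages flow.
    """
    values = dict(values)
    inbox = {v: [] for v in graph}
    active = set(graph)          # superstep 0: every vertex is active
    while active:
        outbox = {v: [] for v in graph}
        for v in active:
            incoming = inbox[v]
            new_val = max([values[v]] + incoming)
            changed = new_val != values[v]
            values[v] = new_val
            # Broadcast on the first superstep or when the value changed,
            # then vote to halt; an incoming message reactivates a vertex.
            if changed or not incoming:
                for u in graph[v]:
                    outbox[u].append(new_val)
        inbox = outbox
        active = {v for v in graph if outbox[v]}
    return values

print(pregel_max({'A': ['B'], 'B': ['C'], 'C': []},
                 {'A': 3, 'B': 1, 'C': 2}))  # {'A': 3, 'B': 3, 'C': 3}
```

The real systems distribute vertices across worker machines and checkpoint between supersteps, but the compute/message/halt loop above is the essence of the model.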

Some analysis of Royce's own concerns with the Waterfall model

Over at the *Art of SW Dev* is a very good post giving some historical analysis of [Waterfall vs Agile](http://sinnema313.wordpress.com/2010/01/16/waterfall-vs-agile/), using Royce's original paper and a good understanding of what agile means in the present day. It finds that Royce has been very unfairly mischaracterized: he found many flaws in the Waterfall model (some suggest labelling it *single-pass waterfall* to give Royce credit for wanting iteration), and wanted things like full test coverage, people over process, and so on.

It’s worth a parse and some thought.

Do you have any of the top 25 coding errors in your code?

This year’s list of the [top 25 coding errors]() was released by the [Common Weakness Enumeration]() project. Development teams and management should be aware of these trends and use them as quality-requirement checklists in their own development processes.
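One perennial entry on these lists is SQL injection (CWE-89). The sketch below shows the vulnerable string-concatenation pattern next to the parameterized fix, using `sqlite3` from the Python standard library; the table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query,
# so this matches every row instead of none.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Fixed: a bound parameter is treated as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(unsafe), len(safe))  # 2 0
```

Turning each relevant entry on the list into a checklist item like "no string-built SQL" is one practical way a team can consume it.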

Cloud efficiencies for utilization and DCs

There is strong evidence out there that medium-to-large datacenters are running at 7-10x lower efficiency than the large service centers run by the likes of Amazon or Google. James Hamilton gave a [talk](http://mvdirona.com/jrh/work) on this topic, with [slides](http://mvdirona.com/jrh/work) covering the real costs and the innovations happening circa 2010.

Critique of Cloud Computing

[Here](http://thingsthatshouldbeeasy.blogspot.com/2009/10/stormy-skies-for-cloud-computing.html) is an old but thoughtful post by Eugene Rosenfield on some of the shortcomings of hybrid/public cloud provision. His main points of criticism concern:

1. SSO – In most cases this is going to break any single sign-on you have for internal users, particularly if they authenticate directly through Windows.
2. WAN vs LAN – bandwidth and reliability. WAN costs are something people won’t be thinking of, since most of them are used to thinking at LAN speeds.
3. System integration – I’m not certain I agree with Eugene’s critique here. He claims that having the services to be integrated sit outside your network boundary increases the integration complexity. But why should one integration call be more or less difficult than another depending on URL length (I need a FQDN for something external, perhaps) or the number of hops?
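A rough back-of-envelope on point 2 (the latency figures are illustrative assumptions, not measurements): a chatty integration that feels instant on a LAN can become painful over a WAN from round-trip time alone, before bandwidth or reliability even enter the picture.

```python
# Assumed round-trip times: ~0.5 ms within a datacenter LAN,
# ~50 ms across a typical WAN link.
LAN_RTT_MS = 0.5
WAN_RTT_MS = 50.0
CALLS = 200          # sequential calls in a chatty integration

lan_total = CALLS * LAN_RTT_MS / 1000   # total seconds spent waiting
wan_total = CALLS * WAN_RTT_MS / 1000

print(f"LAN: {lan_total:.1f}s  WAN: {wan_total:.1f}s")  # LAN: 0.1s  WAN: 10.0s
```

A 100x RTT gap multiplied by a chatty call pattern is exactly the cost people "thinking at LAN speeds" fail to anticipate.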

Anyway, worth a parse. Thanks, Eugene.

Bitcoin – decentralized internet currency

I learned today about [Bitcoin](http://en.wikipedia.org/wiki/Bitcoin), a digital currency created in 2009 by Satoshi Nakamoto. The name refers both to the open source software he designed to make use of the currency and to the peer-to-peer network formed by running that software.
Bitcoin eschews central authorities and issuers, using a distributed database spread across nodes of a peer-to-peer network to track transactions. Bitcoin uses digital signatures and proof-of-work to provide basic security functions, such as ensuring that bitcoins can be spent only once per owner and only by the person who owns them.
Bitcoins, often abbreviated as BTC, can be saved on a personal computer in the form of a wallet file or kept with a third-party wallet service, and in either case bitcoins can be sent over the Internet to anyone with a Bitcoin address. The peer-to-peer topology and lack of central administration are features that make it infeasible for any authority (governmental or otherwise) to manipulate the quantity of bitcoins in circulation, thereby mitigating inflation.[1]
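A toy sketch of the proof-of-work idea mentioned above: search for a nonce whose hash, combined with the data, begins with a required number of zero digits. Real Bitcoin uses double SHA-256 over block headers against a numeric difficulty target; this shows only the shape of the scheme.

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Find a nonce so that sha256(data + nonce) starts with
    `difficulty` zero hex digits: costly to find, cheap to verify."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine("block data", 4)
digest = hashlib.sha256(f"block data{nonce}".encode()).hexdigest()
print(digest.startswith("0000"))  # True
```

The asymmetry is the point: any node can verify the nonce with one hash, but producing it takes many attempts, which is what makes rewriting the transaction history prohibitively expensive.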