2007-04-01

Language of the year

For me, this year's language is assembly. I did a little 6502 assembly as a teen - wrote a disassembler for the BBC B's OS and mucked around with its interrupts in high school - some 6800 during university, and a tiny bit of 386 inline assembly to optimise the Gouraud shading loops in my MSc project in 1992, so it's not quite a new language, but it's one I haven't used in a few years.

It seems there is more assembly around these days - some of it thinly disguised as C++ SIMD intrinsics, but quite a bit of the lock-free code I've been seeing is straight assembly. Perhaps I'm just looking at more parallel and concurrent code. The high-level languages we currently have don't seem to offer abstractions that match multi-core architectures, so we end up writing our own low-level libraries to bridge the gap.
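To make the "thinly disguised assembly" point concrete, here's a minimal sketch of my own (not taken from any particular library) that sums an array of floats with SSE intrinsics - each intrinsic maps almost one-to-one onto a single x86 instruction, so it's really assembly wearing C++ syntax:

```cpp
// Sketch only: summing floats with SSE intrinsics. Each intrinsic is a thin
// wrapper over one instruction (xorps, movups, addps, ...). Names are mine.
#include <xmmintrin.h>   // SSE intrinsics
#include <cstddef>

float sum_sse(const float* data, std::size_t n)
{
    __m128 acc = _mm_setzero_ps();                       // xorps: four zero lanes
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(data + i));   // movups + addps

    // Horizontal add of the four lanes, then the leftover tail in plain C++.
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float total = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; ++i)
        total += data[i];
    return total;
}
```

Nothing here that a vectorising compiler couldn't do for you on a good day, but the point is that the programmer is effectively scheduling instructions by hand.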

There are also big gains in using GPUs for parallel computation, which again means getting closer to the hardware, though there are C and C++ toolkits for GPUs out there. For the last few years I've looked at languages which sit higher up - Python, Common Lisp, ML, JavaScript. Fortress stands somewhere else - it's a very high-level language, but very careful to have a rich enough type system to allow a strongly optimising compiler for parallel hardware. I still don't believe that type systems have to be static to be optimised well, but they do need to be strong.

Targeting these new architectures, the PeakStream platform looks interesting - though it seems to target only workstation GPUs, and may not be a platform (which I take as synonymous with virtual machine) so much as a set of GPU-optimised libraries. As my only home machine is a laptop, I don't have hardware with its target GPUs, or one suitable for NVIDIA CUDA.

The Sh shader metaprogramming library also looks like the sort of thing that may be useful - it uses C++ operator overloading, much like Boost lambda or RealLib, to generate an AST of the expression, which is then evaluated on either a CPU or a GPU backend (RealLib expressions are evaluated on the CPU, sometimes with SIMD intrinsics, to the required precision). I'm interested in how Fortress' parallel for might get implemented on a GPU, and whether it's still necessary to detect glitches on gamer rather than workstation class GPUs - traditionally the gamer-class GPUs ran hotter, so weren't guaranteed to produce accurate results all the time, but there's no mention of that in recent generations.
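As a rough illustration of the operator-overloading trick that Sh and RealLib rely on - the classes and names below are my own invention, not their actual APIs - the idea is that arithmetic on wrapped values builds an expression tree rather than computing results, and a CPU or GPU backend can then walk that tree however it likes:

```cpp
// Sketch of expression capture via operator overloading. User code looks like
// ordinary arithmetic, but the overloads build an AST; a backend later walks
// the tree and either evaluates it on the CPU or emits GPU/stream code.
#include <memory>
#include <iostream>

struct Node {
    char op;                        // '+', '*', or 0 for a literal leaf
    double value;                   // used when op == 0
    std::shared_ptr<Node> lhs, rhs;
};
using Expr = std::shared_ptr<Node>;

Expr lit(double v)             { return std::make_shared<Node>(Node{0, v, nullptr, nullptr}); }
Expr operator+(Expr a, Expr b) { return std::make_shared<Node>(Node{'+', 0, a, b}); }
Expr operator*(Expr a, Expr b) { return std::make_shared<Node>(Node{'*', 0, a, b}); }

// One possible backend: evaluate on the CPU. A GPU backend would traverse the
// same tree and generate shader or stream code instead.
double eval(const Expr& e) {
    if (e->op == 0) return e->value;
    double l = eval(e->lhs), r = eval(e->rhs);
    return e->op == '+' ? l + r : l * r;
}

int main() {
    Expr e = lit(2) * lit(3) + lit(4);   // builds the AST for (2*3)+4
    std::cout << eval(e) << "\n";        // prints 10
}
```

The real libraries are far more elaborate (typed vectors, swizzles, control flow), but the capture-then-compile structure is the same.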



This post is delayed from Thursday as I was off in York, finding a venue for my wedding reception. My lovely lass stayed with me for a fortnight, but now she's back in Glasgow :(.


TME

