Take a look at the BorderWatch main method (or view the whole module for context):
[source]
void main()
{
    initArthur(AppName, OrgName);
    scope(exit) termArthur();
    auto console = createConsole(ConsoleData(AppName));
    menuScreen(console);
}
[/source]
The Console is the means of displaying the ASCII graphics to the user. Initially, there was only one kind, a "heavyweight" console that represents a window on the OS. But after realizing how annoyingly awkward it was to position things properly with my naive implementation of clipping, I came to the conclusion that it would be much nicer to have another type, "virtual consoles", which maintain their own coordinate space.
At the heart of both types of Console is an array of Symbols (an ASCII character and RGBA values). A virtual console's buffer is a subrect of its parent's buffer. Every time you print to a console, it is marked as dirty. When you call the render method on a console, it first looks to see if any of its children are dirty. If they are, it copies the children's symbols into the proper region of its own buffer.
In C, this sort of operation would most likely be accomplished with a loop and a memcpy, copying entire rows at a time. My D implementation is done similarly, but instead of memcpy, I use array slices. Here's the (uncommented) code from console.d that does the work:
[source]
void render()
{
    foreach(c; _children)
    {
        if(c._dirty)
        {
            uint dstStart = c.x + (c.y * columns);
            uint dstEnd = dstStart + c.columns;
            uint srcStart, srcEnd;
            for(uint i=0; i<c.rows; ++i)
            {
                srcStart = i*c.columns;
                srcEnd = srcStart + c.columns;
                // Here's the slicing...
                // The symbols from one row of the child's buffer are
                // copied to one row of the destination buffer.
                _symBuffer[dstStart .. dstEnd] = c._symBuffer[srcStart .. srcEnd];
                dstStart += columns;
                dstEnd = dstStart + c.columns;
            }
            _dirty = true;
            c._dirty = false;
        }
    }
}
[/source]
If you haven't yet read the article I linked above, here's a quick explanation. _symBuffer[dstStart .. dstEnd] takes a 'slice' of the _symBuffer array, starting at the index indicated by dstStart (inclusive) and ending at the index indicated by dstEnd (exclusive). That slice is then assigned all of the values contained in the slice of the child's buffer taken from srcStart to srcEnd. There's no need for pointer arithmetic, no chance of overwriting memory, no need to worry about allocations or deallocations... it's all safe and convenient.
Another use I had for array slices, in the same file, is in the following method:
[source]
void fill(ubyte c, ubyte r, ubyte g, ubyte b, ubyte a = 255)
{
    auto symbol = Symbol(c, r, g, b, a);
    // Here's the slice...
    _symBuffer[] = symbol;
}
[/source]
Here, I'm taking a single symbol and using the slice syntax to assign it to the entire array.
These are seemingly fairly trivial things, but I can tell you that they make a big difference. I've been using C for many years and, though I've been frustrated from time to time, I've never actually hated it. But the more I use D, the more I miss little things like this when I go back to my C codebase. It almost makes me not want to go back at all.