Wednesday, September 1, 2021

Test Post

 It's been 10 years since I posted anything here (basically since I left the blogger.com team at Google).

Curious to see how well this still works. And...it still sucks for posting source code examples. Oh well.

Tuesday, July 24, 2012

Scripting Languages for Games - the good, the bad, and the appalling

Having worked on a lot of games, I've also encountered and worked with a lot of the scripting languages embedded within them. Here are a few of them:

  • SAGA and SAGA 2 (Scripts for Animated Graphic Adventures), my own language which I created for Inherit the Earth and Faery Tale Adventure 2.
  • Lua, a lightweight general purpose language for embedding, used in many games including World of Warcraft.
  • Edith, the scripting language used for The Sims and The Sims 2.
  • Swarm, a particle system language developed by Andrew Willmott at Electronic Arts, used in Sim City 4, Sims 2, Spore, and many other EA games.
  • Edibles, a scripting language created by PostLinear Entertainment (1998).
  • UnrealScript, the scripting language for the Unreal game engine.
Some of these languages were a pleasure to work with - tight, well designed languages that provided clear benefits to the game development process. Lua and Swarm are both shining examples of this.

On the other hand, some of these languages were not so wonderful - poorly designed and cumbersome to work with.

And then there is a third category - cases where the game developers shouldn't have used a scripting language at all. The Sim City 4 city advisor system comes to mind - there was no reason to use a general purpose scripting language for this; the advisor messages could have been stored as text string resources with far less complexity.

Imperative Scripting Languages


Another observation is that most of these languages aren't terribly innovative as languages. With the exception of Swarm, all of the above languages are fairly standard imperative programming languages, not much different from BASIC, Pascal or JavaScript.

Take Edith for example. Edith's main innovation was that it was a "graphical" programming language, where game content designers could place functional blocks on the page and wire them together. Semantically, however, these diagrams were nothing more than flowcharts - one of the least expressive forms of programming known to computer science. As one developer put it, "Edith is basically Assembly language with the user interface of LEGO Mindstorms."

UnrealScript had a built-in "state machine" control flow construct - but the same thing could have been done using a conventional switch statement with almost the same number of lines of code.

Lua, which is the most general purpose language on the list, also has the largest number of innovative language features - for example, Lua supports coroutines, which are a powerful way to deal with asynchronous processes. But for the most part, there's not much difference between what you can do in Lua and what you can do in Python.

In general, there is very little that can be expressed in these languages that cannot be expressed in C++ or Java or Python just as easily. Which leads to the obvious question - what's the benefit of using these languages instead of coding the game logic in C++? Why go through all the trouble of embedding a language in the first place?

The answer lies not in the language syntax, but rather in its compilation and runtime environment. These embedded languages all have three major advantages over C++:
  • Compilation is very fast.
  • Individual modules can be recompiled without recompiling and re-linking all other modules.
  • Recompiled modules can be dynamically loaded into a running version of the game without having to restart it.
This means that the iterative development cycle is extremely fast. Instead of waiting 5 minutes for the program to compile and link - the typical C++ developer's experience - the programmer using these scripting languages can see new behavior manifest in the game within seconds of hitting "save" in their editor.
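
Here's a rough sketch, in C++ using the Lua C API, of the kind of hot-reload loop this enables. The script name and the 'update' hook are hypothetical; the API calls are real Lua ones.

#include <filesystem>
#include <iostream>
#include <lua.hpp>

namespace fs = std::filesystem;

int main() {
  lua_State* L = luaL_newstate();
  luaL_openlibs(L);

  const char* script = "behavior.lua";   // hypothetical script file (assumed to exist)
  fs::file_time_type lastWrite{};

  while (true) {                          // the game's main loop
    // Reload the script whenever the scripter hits "save" in their editor.
    auto stamp = fs::last_write_time(script);
    if (stamp != lastWrite) {
      lastWrite = stamp;
      if (luaL_dofile(L, script)) {
        std::cerr << lua_tostring(L, -1) << "\n";
        lua_pop(L, 1);
      }
    }

    // Call a script-defined hook each frame (the hook name is hypothetical).
    lua_getglobal(L, "update");
    if (lua_isfunction(L, -1)) {
      lua_pushnumber(L, 1.0 / 60.0);      // delta time
      if (lua_pcall(L, 1, 0, 0)) {
        std::cerr << lua_tostring(L, -1) << "\n";
        lua_pop(L, 1);
      }
    } else {
      lua_pop(L, 1);
    }
    // (A real game loop would also poll input, render, and exit.)
  }
  lua_close(L);
}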

Another benefit is that the complexity of these languages is often much lower than the language that the game engine is written in, so that game companies can hire less skilled (and less expensive) staff to create the game content with a minimum of training. This means that the programming staff for a game tends to be divided into two separate "castes" - software engineers and "scripters".

Although one might be tempted to think of scripters as being on a somewhat lower level than the software engineers, in many cases the job of a scripter was just as challenging and demanding as any. For example, one of my experiences working on Faery Tale 2 was that scripters would discover bugs in the underlying game engine, and being impatient with the slow pace of bug fixes, would write scripts that were essentially workarounds for those bugs.

Non-imperative languages: Swarm


Now I want to talk a bit about Swarm, because it is so different from all the others. Swarm is not an imperative language - there is no flow of control, no "if" statements or conditional branches, no "for" loops or iteration statements, and so on. However, even without these language constructs, it can be used to specify complex behavior.

Swarm is a declarative language for describing particle systems. Particles are visual effects like sparks, smoke, water droplets, bullets, clouds, dust, and other entities that typically appear in large swarms with short lifetimes. A particle system consists of an emitter - an object that generates particles - and a particle specification - a description of the type of particles that the emitter generates.

An emitter will typically have properties such as emission rate (how fast particles are generated), a delay time before the first particle is emitted, a duration, and a specification for the initial trajectory of the particles. For example, you can have omnidirectional emitters that emit particles in all directions, cone-shaped emitters, square emitters, and so on.

Particles generated by a given emitter have a set of properties, such as:

  • color
  • transparency
  • size
  • velocity
  • duration (how long the particle lasts)
  • whether the particle is affected by gravity, wind, or other forces
  • whether the particle can collide with terrain, buildings, characters, or other particles
  • whether the particle is displayed using a 2D image or a 3D model
  • whether the particle's orientation is aligned with its trajectory or with the player's view
...and so on. Virtually all of these properties can be specified as functions of time - such as a particle that starts out opaque and slowly becomes transparent. And virtually all of these properties can have a random component added to them as well.

In addition, particles have conditions which can trigger actions when a particular event occurs. For example, a particle representing a bomb or grenade could have a trigger condition when the particle collides with a surface or with the ground. The triggered action could be to create a new particle emitter representing the resulting explosion.

All of the emitters, particles, and trigger actions can be specified within the Swarm source files. Complex behaviors are specified by setting up chains of effects - a particle of type A might create an emitter of type B when a certain condition is reached, and the particles emitted by B might create new particles of type C, and so on.
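
Swarm's real syntax isn't shown here, but the flavor of this kind of declarative chaining can be sketched as plain data. Here's a hypothetical C++ rendition (all names invented) of a grenade particle that spawns an explosion emitter on impact, whose smoke fades out over its lifetime:

#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct ParticleSpec;

struct EmitterSpec {
  std::string name;
  float rate = 0;       // particles per second
  float delay = 0;      // seconds before the first particle is emitted
  float duration = 0;   // how long the emitter lives
  const ParticleSpec* particle = nullptr;
};

enum class Trigger { CollideWithTerrain, Expire };

struct TriggerAction {
  Trigger when;
  const EmitterSpec* spawn;   // emitter to create when the trigger fires
};

struct ParticleSpec {
  std::string name;
  float lifetime = 0;
  bool affectedByGravity = false;
  // Time-varying properties are functions of normalized age in [0..1].
  std::function<float(float)> opacity = [](float) { return 1.0f; };
  std::vector<TriggerAction> triggers;
};

// The chain: launcher emits grenades; a grenade that hits terrain spawns
// an explosion emitter; the explosion emits smoke that fades out.
ParticleSpec smoke    = { "smoke", 3.0f, false,
                          [](float t) { return 1.0f - t; }, {} };
EmitterSpec explosion = { "explosion", 200, 0, 0.5f, &smoke };
ParticleSpec grenade  = { "grenade", 5.0f, true, [](float) { return 1.0f; },
                          { { Trigger::CollideWithTerrain, &explosion } } };
EmitterSpec launcher  = { "launcher", 1, 0, 10, &grenade };

int main() {
  std::cout << launcher.name << " emits " << launcher.particle->name
            << ", which spawns '" << grenade.triggers[0].spawn->name
            << "' on impact\n";
}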

For example, one artist on Sim City 4 (Ocean Quigley) created a complex alien invasion script with a giant robot descending from the skies on a blast of rocket jets who then attacked the city with flying discs of destruction, and then blasted back off into space - and all of this was done with a single Swarm script, with no changes to the game engine!

Another impressive demo (which, alas, did not make it into the shipped game) consisted of a water erosion simulation. Andrew had added a new particle effect type which would cause a small depression in the terrain whenever it was struck by that particle. He could then create emitters which represented water sources emitting droplets of water, which would then flow downhill, eroding gullies and channels in the terrain.

If you think about particles as processes, then Swarm is a language of implicit parallelism - computation consists of lots of simple actions and behaviors going on simultaneously and asynchronously. And yet the language itself was so simple that it only took a few hours for a game artist to master it. In particular, artists did not need to learn the complex concepts of flow-of-control, recursion, iteration, and other features of imperative programming languages.

Another cool feature of Swarm was the way it handled reloading of source files: The Swarm interpreter would use the filesystem API to be notified whenever any files in the source directory changed. So all an artist had to do was hit Ctrl-S in their text editor, and immediately see their changes take effect. Typically artists would arrange their desktop so that the text editor and the game window were both visible side-by-side.

Alternatives to scripting languages


Scripting languages are often used to specify the behavior of objects within the game. However, an alternative which has frequently proved successful is to treat objects as bundles of properties, where the presence or value of a property influences the object's behavior. This means that instead of editing text files, the game developer uses a property editor - a UI widget that allows them to set the values of these properties from within the game.

Typically a property editor widget is generic and is driven by a schema definition. The schema definition contains a description of all of the properties a given object is allowed to have, and what the types and allowable values of those properties are. The type of a property determines what type of editor to use to edit that property - so a "string" property will generate a text input box, while a "color" property will generate a color picker widget.
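
A minimal sketch of the idea in C++, with invented property names and widget kinds; the real editor would of course build UI widgets rather than print placeholders:

#include <iostream>
#include <map>
#include <string>

// Hypothetical schema entry: the property's type plus an allowable range.
enum class PropType { String, Number, Color, Boolean };

struct PropertyDef {
  PropType type;
  double minValue = 0, maxValue = 0;   // only meaningful for Number
};

// Schema for a "building" object (property names are invented).
std::map<std::string, PropertyDef> buildingSchema = {
  { "name",         { PropType::String } },
  { "flammability", { PropType::Number, 0, 100 } },
  { "powerDraw",    { PropType::Number, 0, 1e6 } },
  { "roofColor",    { PropType::Color } },
  { "isLandmark",   { PropType::Boolean } },
};

// A generic editor walks the schema and picks a widget per property type.
void buildEditor(const std::map<std::string, PropertyDef>& schema) {
  for (const auto& [name, def] : schema) {
    switch (def.type) {
      case PropType::String:  std::cout << name << ": [text input]\n"; break;
      case PropType::Number:  std::cout << name << ": [slider " << def.minValue
                                        << ".." << def.maxValue << "]\n"; break;
      case PropType::Color:   std::cout << name << ": [color picker]\n"; break;
      case PropType::Boolean: std::cout << name << ": [checkbox]\n"; break;
    }
  }
}

int main() { buildEditor(buildingSchema); }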

An example of this is the building definitions in Sim City 4. There were many hundreds of different types of buildings, and each one had many dozens of properties such as zoning requirements, electricity consumption, flammability, and so on. The content producers who entered this data needed to have lots of domain knowledge about cities in order to create a city simulation that was appropriately believable.

Of course, the interpretation of these properties - such as how the flammability property influences the chance of the building catching fire, and how quickly the fire would spread - was coded in C++ as part of the game engine. But from a game production standpoint, the benefit of using properties was the same kind of rapid development afforded by scripting languages, but with far less complexity.

Wednesday, July 11, 2012

Tart still on hiatus, but not dead

I've been greatly enjoying working on my Android game project lately - here are some screen shots of my most recent progress: https://plus.google.com/102501415818011616468/posts/51UaQviTyfM

Naturally this means that I haven't been doing any active coding on Tart for the past 8 months or so. However, that's not necessarily a bad thing. You see, even though I'm not actively working on it, it's still percolating in my subconscious mind, and occasionally an idea pops out. I've often found that some problems are best solved by stepping back and letting go for a while - sometimes the focus on solving a particularly vexing problem causes a kind of tunnel vision, while stepping back and getting some distance from the problem can lead to a new perspective on how to solve it - or even how to avoid it being a problem at all.

I thought I would take some time to jot down some of the issues that I have been thinking about.

I think that there are some aspects of Tart for which I never quite had a clear design idea. That is, there were certain parts of the language design where I figured that I would fill in the blanks at a later time, and looking back on the project I realize that those areas never did get filled in properly. Worse, I realize that some of those design aspects should have been designed up front rather than being put off until later, because the design choices affect the compiler's output in a fairly fundamental way.

1) Dynamic Linking

An example of this is dynamic linking of modules. Being able to divide an application into separately loadable chunks is fairly fundamental these days. Of course, at the lowest level we know how this is all supposed to work - the semantics of shared libraries and dynamic linking are well understood. But high-level languages need something more than just stitching together of linker symbols. They need to be able to do method calls across module boundaries, which means that the class definitions in the various modules must be interoperable. If two modules both import references to a particular class, those two class definitions must be combined somehow - otherwise you end up with two distinct classes that happen to have the same name, and instances of those classes will no longer be comparable, even though they may have identical values. 

In the past, the Tart compiler has generated a large quantity of static metadata for each class (dispatch tables, reflection data, GC stack maps, etc.), and relied on LLVM's linker to combine identical definitions. This is fine for static linking, but doesn't address the problem of dynamic linking at all. To address the problem of dynamic linking, a different approach is needed.

My current thinking is to replace a lot of the statically defined data structures with one or more global registries. For example, you would have a global registry of types. Whenever a module is loaded which contains a reference to a type, it would look it up in the registry (by name if it's a named type, by type expression otherwise), and if the type exists then the module would use the returned value as the type pointer. If the type does not exist, then the module will contain a static string that is a serialized description of the type that can be used to construct the actual type object. In other words, the first module that uses a given type gets to set the definition of that type, and all other modules will use that definition.
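
Here's a rough sketch of what such a registry might look like, in C++ with hypothetical names; the real thing would live in the Tart runtime and decode serialized type descriptors rather than plain strings:

#include <iostream>
#include <map>
#include <string>

// Hypothetical runtime type object.
struct TypeInfo {
  std::string name;
  // ... dispatch tables, field layout, GC trace map, reflection data, etc.
};

// Stub: a real implementation would decode the compact serialized
// description embedded in the module into a full TypeInfo.
TypeInfo* buildTypeFromDescriptor(const std::string& descriptor) {
  return new TypeInfo{descriptor};
}

class TypeRegistry {
 public:
  // Called by each module at load time, once per type it references.
  // The first module to register a type defines it; later modules just
  // receive the existing pointer, so all of them share one type identity.
  TypeInfo* resolve(const std::string& name, const std::string& descriptor) {
    auto it = types_.find(name);
    if (it != types_.end())
      return it->second;
    TypeInfo* t = buildTypeFromDescriptor(descriptor);
    types_[name] = t;
    return t;
  }
 private:
  std::map<std::string, TypeInfo*> types_;
};

int main() {
  TypeRegistry registry;
  TypeInfo* a = registry.resolve("List[String]", "descriptor-from-module-A");
  TypeInfo* b = registry.resolve("List[String]", "descriptor-from-module-B");
  std::cout << (a == b) << "\n";   // 1: both modules see the same type object
}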

Note that the serialized description of the type could potentially be smaller than the instantiated type object, since there would be no need to store NULL pointers, and the serialized information could be compressed.

Of course, this can lead to problems if two modules have incompatible definitions of a type. For example, two modules might not agree on the length or structural layout of the type. To solve this problem, we need to be able to check if two type definitions are compatible. A strict check is fairly easy - see if the definitions are identical. But strict checks lead to brittleness and versioning hell - ideally you want some flex in the system so that a dependent module can be modified without recompiling the world.

A very flexible approach involves constructing the class dispatch tables at runtime instead of at compile time. This allows you to add or delete methods of a class and still remain compatible with old code. This works because the method offset into the dispatch table is no longer fixed at compile time, but calculated at runtime, so if the dispatch table slots change (because methods are added or removed) it won't break older code.

One can go even further and use a dispatch map rather than an array, similar to what Objective-C does: For each method signature, you generate (at runtime) a selector ID. Calling a method involves looking up that selector ID in a hash map, which returns the actual method pointer. While this does cause some additional overhead for certain kinds of method calls, other types of calls (such as calls through an interface) might potentially be sped up.
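
Here's a rough sketch of that kind of selector-based dispatch in C++ (all names hypothetical; a real runtime such as Objective-C's interns selectors and caches lookups far more aggressively):

#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

using SelectorId = uint32_t;
using MethodPtr = void (*)(void* self);   // simplified: no arguments or returns

// Selector IDs are interned at runtime, so adding or removing methods in one
// module can't invalidate offsets compiled into another module.
SelectorId internSelector(const std::string& signature) {
  static std::unordered_map<std::string, SelectorId> interned;
  static SelectorId next = 0;
  auto [it, fresh] = interned.emplace(signature, next);
  if (fresh) ++next;
  return it->second;
}

struct Class {
  std::unordered_map<SelectorId, MethodPtr> dispatch;   // built at load time
};

// A method call looks up the selector in the receiver's class map.
void send(Class* cls, void* self, SelectorId sel) {
  auto it = cls->dispatch.find(sel);
  if (it != cls->dispatch.end())
    it->second(self);
  // else: "message not understood" handling would go here
}

struct Point { int x = 0; };
void printPoint(void* self) { std::cout << static_cast<Point*>(self)->x << "\n"; }

int main() {
  Class pointClass;
  SelectorId sel = internSelector("print()");
  pointClass.dispatch[sel] = &printPoint;
  Point p{42};
  send(&pointClass, &p, sel);   // prints 42
}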

2) Debugging

Support for DWARF has been the bane of my existence, and has caused me more frustration than any other aspect of working on Tart. And to add insult to injury, almost all of the DWARF information is duplicative - that is, it's simply repeating information that's already present in the reflection data (such as a description of fields and methods of a class), but encoded differently.

I recently learned, however, that lldb (the LLVM debugger project) can potentially support custom debugging formats. That is, it would not be impossible to write an extension to lldb that directly parses the reflection data of a module instead of using DWARF.

This does mean, however, that source-level debugging would only work with lldb, and not gdb. That may be too much of a price to pay - for one thing, I like the Eclipse integration with gdb, and not being able to debug in Eclipse would be sad.

3) The name "Tart"

I've never been completely satisfied with the name "Tart", but unfortunately all the good names are taken. However, I'm concerned about possible confusion or conflation with "Dart", which is yet another language that Google is trying to promote.

Anyway, this is not a complete list of all the things I have been thinking about, but it's late, and I shall leave the rest for another time.

--
-- Talin

Sunday, December 25, 2011

Status update

It's been a while since I've posted on my progress. I'm still in the "taking a break from Tart" mode, which is why you haven't heard from me lately. I plan to get back to working on it after the holiday.

In the meantime, I've been busy working on Mint, and things are going pretty well. Mint can not only build itself directly, but can also generate makefiles to build itself indirectly. The Mint source tree now has a "build.osx" directory which contains both the Makefile and header files needed for building Mint on OS X. The contents of this directory are generated entirely by Mint. There's also a "build.linux" directory which will (soon) have the same setup for Linux platforms.

One of my design goals has been to make Mint useful for developers, even if their users don't have Mint installed on their systems. It would be hard to convince developers to use any new tool if it required all of their users to install that tool as well. Of course, Mint would be even more useful if the users had it installed, but the first thing is to get some sort of momentum in the developer community.

To that end, I've made it so that the Makefiles generated by Mint are position-independent, so that you can check them into the source tree along with your source files. In other words, a developer can use Mint on their personal workstation, and then generate Makefiles which can be invoked by the users who check out their sources, without those users having to install Mint themselves.

Unfortunately, most projects need more than just Makefiles to build - they also need configuration scripts. For building on a foreign system, one can generally assume that something like 'make' exists, which means that one can use that as a common foundation. With configuration, however, that foundation is far less firm. There's autoconf, of course, which is built up out of shell scripts and m4 macros, but while most Unix-like systems have some version of autoconf installed, they may not have the version of autoconf that you need.

I haven't quite figured out what to do about this, but there are a couple of different paths that could be explored. One idea would be to have Mint generate autoconf-compatible scripts; another would be to generate my own configuration scripts that were not autoconf-based. A third approach would be to not do configuration at all, but instead ship pre-configured Makefiles / build scripts for the most popular platforms.

Ultimately, what I suspect developers want is a tool that will let them generate build scripts that will run on all of the platforms that they care about, while requiring the fewest number of additional tools or dependencies. So for example, it would be nice to press a button and get a Visual Studio project file, an XCode project, a Makefile for various platforms, and so on. The downside is that these pre-generated configurations can't cover every possible host environment, and so have to be configured for a least-common-denominator target environment.

Of course, all of this is based on some assumptions about the users. 99% of users don't want to build from source in the first place - they want to install pre-compiled binaries. And a developer working alone doesn't need a fancy configuration and build system, when they can just add a new source file in their IDE and hit the rebuild button. The use cases where one checks out source and rebuilds it are fairly limited by comparison:
  • The case where a team of developers is working on a single project.
  • The case of a software packager who is creating a binary distribution.
  • The case where a project doesn't yet have an installation script or binary packages, so building from source is the only option for users.
For the first two cases, having to build and install Mint first is not that huge of a burden. For the last case, it makes me think that the really compelling use case for Mint is in the creation of installation packages. That is, if I could figure out how to get Mint to automatically generate a .deb or .rpm or a .dpkg, that would be something that I think developers would really be interested in. Heck, that's something I myself would be interested in, for Tart...hmmm, time to start reading the docs on building packages.


Monday, November 28, 2011

Mint progress

As I may have already mentioned, I decided to take a break from working on Tart, and spend a few weeks working on Mint. This should by no means be interpreted as a lack of interest in or commitment to Tart. It's just that there are a number of difficult problems which have caused work on Tart to proceed with frustrating slowness. Most of these problems involve the complexities of generating DWARF debug info via LLVM's APIs, problems which I've already written about extensively and see no need to repeat here.

In any case, I decided to give myself a holiday and work on something fun. And in fact the progress on Mint has been nothing short of astounding - I think I've written several hundred source files over the course of the last two weeks. (I've also been using Eclipse + EGit to do the coding, which has been a remarkably pleasant experience.)

Those who want a quick overview of what Mint is can look here:


(Click the 'code' link at the top of that page if you want to browse the source.)

Of course, part of why the work on Mint is proceeding so rapidly is that unlike Tart, Mint has virtually no dependencies - it is a self-contained program that doesn't depend on complex libraries or other open-source projects in order to work.

At the present time, the "mint" program doesn't actually do anything useful. But it does do a number of things that are interesting.

For example, Mint is able to run simple 'configure' tests, such as testing for the presence of a standard header file like <malloc.h>. It does this by invoking the C preprocessor and then looking at the exit code. Note that these configuration tests are in no way "built in" - instead they are defined in the standard Mint library.

A Python-inspired 'import' statement is used to import the configuration tests into your project's build file:

from prelude:configtests import check_include_file, check_include_file_cpp

HAVE_STDIO_H   = check_include_file { header = "stdio.h" }
HAVE_STDBOOL_H = check_include_file { header = "stdbool.h" }
HAVE_STDDEF_H  = check_include_file { header = "stddef.h" }
HAVE_STDLIB_H  = check_include_file { header = "stdlib.h" }
HAVE_CPP_ALGORITHM  = check_include_file_cpp { header = "algorithm" }

The syntax "projectname:modulename" syntax in the import statement is used to import definitions from another source tree - this is how Mint supports multiple source trees in a single build configuration. The name "prelude" is a special pseudo-project that points to the location of the Mint standard library.

The Mint language tries to be as declarative as possible, describing the end result that you want to have accomplished, while hiding from view the steps needed to get to that end result. Although Mint does have functions, most actions are described by objects rather than function calls. The typical pattern in Mint is that instead of calling a function with N parameters, you would commonly define an object with N named properties, and then 'evaluate' that object to get a list of actions to perform.

The advantage of this approach is that we can take the graph of objects and interpret it in different ways. If we want to actually build an executable program we would evaluate the object graph into a set of runnable actions for invoking compilers or other tools. On the other hand, if we want to generate an Eclipse or XCode project file, we would translate the object graph into an XML file that could be read into those programs.

Going back to the above example, the "check_include_file" symbol refers to a prototype object. In Mint syntax, an expression of the form <prototype> { <properties...> } creates a new object with the given prototype. In this case, we're creating an object that inherits from the 'check_include_file' prototype, and specifying the name of the header file to test for.

The 'configtests' module in the Mint standard library contains the definition of 'check_include_file'. Here's the actual definition (with comments stripped for brevity):

exit_status_test = object {
  lazy param message : string = ""
  param program : string = undefined
  param args : list[string] = []
  lazy param input : string = undefined
  export lazy param value : bool =
      console.status(message)
      or shell(program, args ++ ["2>&1 > /dev/null"], input).status == 0
}

check_include_file = exit_status_test {
  param header : string = undefined
  message = "Checking for C header file ${header}..."
  program = "cpp"
  args    = ["-xc"]
  input   = "#include <${header}>\n"
}

Here's a blow-by-blow explanation of this code:

"exit_status_test" is a generic prototype that performs configuration tests by running some shell program and checking the exit status. "check_include_file" inherits from, and further specializes, this prototype.

A "param" declaration defines an object property. You can only assign values to properties that have been defined in a prototype - this is to avoid spelling errors.

A "lazy param" is one that is evaluated dynamically. That means that the value of the parameter is calculated each time the parameter is accessed (the normal behavior is to evaluate the value of the parameter at the time the value is assigned.) You can think of lazy params as equivalent to no-argument member functions.

Understanding lazy parameters requires a careful understanding of Mint's scoping rules. First, Mint uses lexical scoping, like most modern languages. Mint also uses object-oriented-style lookup rules, meaning that properties defined on an object take precedence over properties defined on that object's prototype. So for example in the code sample above, we can see that the property 'header' is used in the calculation of the 'message' property of 'check_include_file'. Thus, if you tried to evaluate 'HAVE_STDIO_H.message', you'd get the value 'Checking for C header file stdio.h...'. Even though 'message' is a property of 'check_include_file', when it gets evaluated the value of 'header' is taken from HAVE_STDIO_H, not 'check_include_file'.

An "export" parameter is one that should be saved when writing out the build configuration. So for example, we want to save the result of our configuration test (so that we don't have to run it every time), thus the 'value' property is declared as 'export'.

The 'program' and 'args' parameters are the name of the program to run, and the arguments to pass to it, respectively. In the check_include_file prototype, we see that the program to run is "cpp" (The C preprocessor) and the argument is "-xc" (which forces C dialect, as opposed to C++ or Objective-C).

The "input" argument is used to specify text that is to be piped into the program's standard input. In this case, it's a single line which contains a #include of the header we're interested in.

Finally, the 'value' property of 'exit_status_test' is where all of the actual work is done: It first prints a message to the console, and then executes a shell command using the built-in 'shell' function. (Ignore the 'or' in there - that's a temporary hack due to the fact that there's no statement blocks implemented yet.) The 'shell' function returns an object, which has several properties, one of which is the exit status.

If you've gotten this far, I think you'll agree that the language is quite flexible, and that it should be relatively straightforward to create configuration tests of almost any imaginable sort.

I'll show one more code example from the Mint standard library. Although Mint does not yet have the ability to do anything useful with build targets, it does at least allow build targets to be declared. The following is the (unfinished) prototype for build targets that generate executables; what's notable about it is that it can figure out from the source file extension which compiler to invoke. Moreover, this map of file extensions can be extended by objects that inherit from this prototype, making it possible to support other languages.

object_builder = builder {
  param cflags : list[string] = []
  param cxxflags : list[string] = []
  param include_dirs : list[string] = []
  param library_dirs : list[string] = []
  param warnings_as_errors = false
  param builder_map : dict[string, builder] = {
    "c"   = c_builder,
    "cpp" = cpp_builder,
    "cxx" = cpp_builder,
    "cc"  = cpp_builder,
    "m"   = objective_c_builder,
    "mm"  = objective_cpp_builder,
    "h"   = null_builder,
    "hpp" = null_builder,
    "hxx" = null_builder,
    "lib" = identity_builder,
    "a"   = identity_builder,
    "o"   = identity_builder,
  }
  export lazy param implicit_depends : list[builder] = sources.map(
      src => builder_map[path.ext(src)] {
        sources = [ src ]
      })
  actions = implicit_depends.map(b => b.actions)
}

The interesting bit is in the calculation of the 'implicit_depends' property. This uses a Mint lambda function. The syntax for lambdas is "(args) => expr" or just "arg => expr" if there's only one argument. In this case, we're running a 'map' over the list of source files, and for each source file, we get the file extension, look it up in the dictionary, and return an object prototype. We then use that prototype to generate a new object using the "proto {}" syntax. So basically this generates a new builder object for each source file.

Sunday, October 30, 2011

Thoughts on a build configuration language

Work on Tart is on hold for the moment while I am waiting to hear back from the LLVM folks about a bug in llc.

In the meantime, I want to write down a few notes about "mint", which is the name of the hypothetical future build tool that I want to create. Although my plan is to write mint in Tart, there's nothing Tart-specific about it, and it could be written either in C++ or Python or some other language.

I've been thinking about this a lot lately, especially because there's a big discussion on llvm-dev about the future of the build system, and part of that includes some very passionate opinions about the pros and cons of cmake.

Anyway, here are some basic requirements for mint:
  • It should be fast.
  • It should be extensible, especially with respect to 3rd-party tools. One should be able to write an extension module for yacc or antlr that makes it as easy to invoke those tools as it is to invoke gcc.
  • It should support hermetic builds, so that a particular binary can be produced on a given host machine that is identical to one produced on another machine, even if those machines have different environments.
  • It should support generating project files for popular IDEs such as Eclipse, XCode, Visual Studio, and so on.
I'd also like it to support automatic download of dependent packages, but unfortunately this is an area which I don't have a lot of experience in.

So the basic idea is that the build configuration language (BCL) is a hybrid functional / declarative language that supports prototype inheritance.

The language is declarative in the sense that what it primarily does is describe a graph of objects, the Build Dependency Graph (BDG). Objects in the graph include targets, files, tools, actions, and other entities which are used in the build. Making the language declarative means that a build configuration file describes the relationships between these objects, but doesn't say how these relationships are processed or in what order - that is determined by whatever tool is actually doing the build. This is what makes it feasible to generate a project file for an IDE.

The language is functional in that an object's properties can be expressions which are lazily evaluated. For example, the set of flags passed to a compiler is calculated from a number of different inputs, including the user's choices as to whether to enable debug symbols, assertions, and so on.

Prototype inheritance allows for objects in the graph to be re-used. For example, when you declare a build target, you start by inheriting from an abstract build target prototype, and then fill in the details about your specific target. The abstract build target prototype knows about source files, output files, and dependent targets; your target fills in the details of what specific files are involved. Or you may choose to inherit from a build target prototype that knows how to invoke the C++ compiler, which saves you the trouble of having to write the command-line flags for the compiler.

Because prototype inheritance is used so much in this language, the syntax is very simple:

<prototype> { <properties> }

Where 'prototype' is some previously defined object, and 'properties' is a list of key/value pairs. So for example:

myProgram = program.cpp {
  sources = [ "main.cpp" ]
}

Here we are starting with the standard prototype "program.cpp", and filling in the 'sources' slot ('sources' is defined in the prototype, but it's empty). We are then assigning that to the name 'myProgram'.

We can then, if we wish, create a variation of 'myProgram' that is compiled with optimization:

myProgramOpt = myProgram {
  opt.level = 2
}

The "opt.level" property was also defined in the program.cpp prototype. The program.cpp prototype has a tool definition for the cpp compiler. Here's a glimpse of what the program.cpp prototype might look like, with many details omitted.

program.cpp = program.base {
  opt = dict {
    level = 0
  }
  cflags = []
  tool = compiler.cpp {
    flags = [
      match (opt.level,
        0 => "-O0",
        1 => "-O1",
        2 => "-O2",
        3 => "-O3",
        * => raise("Invalid optimization level: " ++ opt.level))
    ] ++ cflags
  }
}

The first few lines define a property called 'opt' whose value is a dictionary type - basically just a namespace. Within that is the property 'level', which defaults to zero. (I'm undecided as to whether 'level' should have a type as well as a default value - it wouldn't make sense to assign a string to 'level'.)

Next we have the tool that will get run when this target gets updated. The "flags" property is a list of flags that gets passed to the compiler on the command line. When the compiler is invoked, each element of 'flags' will get evaluated and transformed into a string. That string is then passed to the compiler as the shell arguments.

In the case of opt.level, we use a match expression to transform the optimization level numbers 0..3 into compiler flags of the form -On. The idea here is to abstract away the specific syntax of the compiler's command-line arguments, so that the same syntax can be used to control different compilers.

There is also a 'cflags' property that can be overridden by descendant objects, allowing flags specific to a particular compiler to be passed in.

The main idea here is that "builtin" compilers for C and C++ use the same descriptive syntax as for 3rd-party extension compilers. In other words, the built-in rules aren't special.

Another kind of prototype is used to define the command-line options to the build script itself:

enable-assertions = option {
  type = bool
  default = false
  help = "Whether to enable assertion checks"
}

This means that the user can type "mint --enable-assertions" to generate a build configuration with assertions enabled. Or even type "mint --help" to print out the various options available.

BTW, one thing that would be neat is if "--enable-assertions" could change just that one option, while leaving other options set to their current values.

Anyway, I'd be curious to know what people think of this idea.

Tuesday, October 18, 2011

JIT / AOT Hybrids

Something I've been pondering a lot lately is the notion of link-time code generation.

The Tart compiler generates a lot of synthetic data and code, compared to a language like C++. This includes reflection tables, stack maps for garbage collection, type adapters, and so on.

What I've been wondering is whether it would make sense to do this work in the linker rather than in the compiler. There's even a precedent for this: there's one data structure which is already generated at link time, which is the list of static roots - global data structures that need to be traced in the collector.

There's a variety of things that could be generated at link time, ranging from the "only slightly ambitious" such as interface dispatch methods, to the "wildly ambitious" such as generating DWARF debugging info or even instantiating templates.

In order to do any of this, the linker would need much more information about high-level data structures and types. Ironically, this information - basically a serialized form of the AST - is already generated by the compiler and stored in the module. This information isn't currently used by the linker - instead, it's used by the compiler to allow already-compiled module files to be imported, without having to have the original source files.

If we take this idea to its logical extreme, what we end up with is something like a Java class file that's compiled ahead of time (AOT), instead of just in time (JIT). That is, you have a single, comprehensive encoding of all of the types and methods in the module, and this encoded information is then used to produce the reflection tables, tracing tables, dispatch tables, and DWARF debug info. (As opposed to what it does now, which is to generate all of these tables separately, at compile time, and then glue them together in the linker.)

This whole approach basically blurs the line between AOT and JIT compilation, in that the inputs to the linker look a lot like the inputs to a VM.

There are some problems to solve, however.

1) One problem with using the serialized AST is that it lacks a lot of information, since it hasn't been fully analyzed. Type names in the AST are just names - we don't know yet which type is the one being referred to. Similarly with methods and variables, all we know is the name at this point. The largest part of the compiler is the part that analyzes the AST and converts it into a CFG (control flow graph) in which all types are resolved, template parameters expanded, and so on. We don't want all of this functionality to live in the linker - instead, we want the compiler to do the analysis and then pass to the linker the results of that analysis.

So the first problem to be solved is to come up with a way to serialize the CFG rather than an AST.

2) A second problem has to do with template instantiation. Currently, templates are not analyzed until after they have been instantiated. The reason is that until a template is instantiated, there isn't enough information to do a successful analysis.

As a simple example, take the max() function whose declaration is "def max[%T](a:T, b:T) -> T". This function requires that its template parameter, T, have a less-than operator defined for it. If there's no less-than operator, the template instantiation fails. Until we actually attempt to instantiate the template and bind some type to T, we don't know whether a less-than operator is defined or not. That's why templates are written out as un-analyzed ASTs, rather than fully-analyzed CFGs.

To solve this problem, we would need to change the way template parameters work. In order to do any operation on a variable whose type is a template parameter, you would have to declare the template parameter as supporting that operation.

Say we have a template parameter T, and we have some value 'a' whose type is T. Now we want to call 'a.toString()'. In the current system, we don't know whether a has a toString() method or not, but if we attempt to instantiate the template with some type that doesn't have a toString() method, then an error will occur. In the new system, in order to call a.toString(), you would have to declare template parameter T as [%T <: HasToString], where 'HasToString' is a protocol type that includes a 'toString()' method. In other words, having a 'toString()' method is now an explicit requirement for parameter T; any attempt to instantiate the template with a type that doesn't have a toString() method will fail early.

(FYI - in the literature on type theory, what I have described corresponds to "parametric polymorphism" whereas the way things work now is "ad-hoc polymorphism".)

Unfortunately, Tart's protocol system only works on instance methods, not on regular functions. Right now there's no way to express the concept "match any type that has a less-than operator defined for it". In other words, what we would need is some way to say "given a type T, return true if there is a function 'def infixLessThan(:T, :T) -> bool' defined somewhere." At the moment, I can't even imagine what that syntax would look like. But for now, let's assume we come up with some way to do this.
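
For comparison, C++20 concepts express the same parametric idea - the constraint declares every operation the template body may use, including requirements that can be satisfied by free functions such as a less-than operator, so unsuitable instantiations fail early. (This is C++ syntax, not Tart's; the names are illustrative.)

#include <concepts>
#include <string>

// Analogue of a 'HasToString' protocol: T must have a toString() method.
template <typename T>
concept HasToString = requires(const T& a) {
  { a.toString() } -> std::convertible_to<std::string>;
};

// Analogue of "has a less-than operator defined for it" - a requirement
// that can also be satisfied by a free operator< found via ADL.
template <typename T>
concept LessThanComparable = requires(const T& a, const T& b) {
  { a < b } -> std::convertible_to<bool>;
};

// max() constrained the "parametric" way: the requirement is declared
// up front, so instantiation with an unsuitable type fails immediately.
template <LessThanComparable T>
T myMax(T a, T b) {
  return (a < b) ? b : a;
}

template <HasToString T>
std::string describe(const T& value) {
  return value.toString();
}

int main() {
  return myMax(1, 2) == 2 ? 0 : 1;
}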

By pre-declaring what methods we need, we can do the analysis passes on the template and create a CFG. This CFG won't actually be complete, because some types are still not known. But we can at least do enough analysis to be able to write the CFG out to a file.

3) Currently, you can link Tart programs in two ways: Either by using the Tart linker, 'tartln', or by using the standard LLVM tools (llvm-ld, opt, and llc). If using the latter, however, you need to load in the Tart-specific plugins for reflection and garbage collection.

In order to do the link-time code generation that I'm describing, we would need to put all of the code to do that into the shared library, so that it can be loaded as a plugin as well.