Ask HN: Makefile for Command Runner?

snicker7 | 4 points

If I need to run any make-like tasks I am quite happy with https://github.com/melezhik/Tomtit

It's written in Raku and is extendable via a lot of plugins - http://repo.westus.cloudapp.azure.com/hub/

melezhik | 4 years ago

Make is not really the tool for the job: for anything non-trivial (multiline) you end up with trailing \'s (there are some hacks around that), and shell variables need their $ doubled, so

    run:
        for name in a b c ; do \
            some-command $$name ; \
        done
You're better off writing that in a proper shell script and then

    run:
        sh path/to/script.sh $(SOME) $(ARGS)
IMHO
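A hedged sketch of what that script might look like (path/to/script.sh and some-command are just the placeholders from the snippet above, with echo standing in for the real command):

```shell
#!/bin/sh
# path/to/script.sh -- plain shell: no $$ doubling, no trailing \'s
set -eu
for name in a b c; do
    echo "some-command $name"   # stand-in for the real some-command
done
```

Invoked from the Makefile as `sh path/to/script.sh`, the script keeps its own quoting and variable rules, independent of make's.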
jjgreen | 4 years ago

Isn't the advantage of Make that it does dependency resolution: so you can tell it to "make a" and then it will run "b" and "c" because "a" depends on those?
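Yes - a minimal sketch of that behaviour, assuming make is installed (the target names a, b, c here just mirror the example in the comment):

```shell
# write a throwaway Makefile in which a depends on b and c,
# then ask make for a: it builds b and c first, then a
dir=$(mktemp -d)
cd "$dir"
printf 'a: b c\n\tcat b c > a\nb:\n\techo b > b\nc:\n\techo c > c\n' > Makefile
make a
cat a   # the concatenation of b and c
```

Running `make a` a second time does nothing, since a is newer than b and c - that timestamp check is the dependency resolution.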

PaulHoule | 4 years ago

make is for expressing a directed acyclic graph of transformations that produce output files from input files. this is a good fit for compiling libraries & applications, and also for other things like data processing pipelines, where you need some repeatable process that transforms and combines input data files into some output - provided you write a makefile that tells make about the dependency structure.

there is little point in using make if you're not taking advantage of its ability to track the dependencies of file-driven workflows

an even more extreme build tool in the spirit of make is tup: http://gittup.org/tup/

tup instruments filesystem access to monitor what your build rules do while tup is executing a build. this means tup can detect when a build rule reads from or writes to a file that isn't captured as a dependency in the tupfile (the makefile equivalent). tup regards this as an error - you're meant to update the tupfile to capture all the file dependencies, so that tup-driven builds can be even faster and more accurate.

I once spent too much time thinking about build systems, and tried to express the computation of running an automated test suite for a monorepo python codebase with tup. The goal was incremental testing: if you updated file a, the build system would detect that libraries b, c and e had transitive dependencies on a and would run the corresponding subset of tests for a, b, c and e instead of running the full set. If you rig a test runner to produce a test report file, you can regard testing as a function that maps an input test program to an output test report file, which can be modelled by make and tup. So far so good.

Things start to get complicated because the behaviour of running a python script depends on all the other python files and libraries it dynamically imports at runtime, so for accurate incremental testing you would need to map out all those runtime dependencies and declare them to the build tool. It got even more perverse with tup: by default the python interpreter generates pyc bytecode files and craps them all over the filesystem when executing scripts, and tup's reaction to that is "you promised me in your tupfile that running this build rule would only output a test report, but look, it also produced a pyc file! You liar! Build error." So it turns into an exercise in configuring python and wrapping various python tools so they fit the regime of deterministic functions that consume files and write files.
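As an aside, the stray-pyc problem has a standard knob: CPython's -B flag (or the PYTHONDONTWRITEBYTECODE=1 environment variable) suppresses bytecode writing, which helps a test run behave as a pure file-in/file-out function for tup. A minimal check:

```python
import subprocess
import sys

# run a child interpreter with -B and confirm bytecode writing is off;
# under tup this keeps a build rule from emitting undeclared .pyc outputs
out = subprocess.run(
    [sys.executable, "-B", "-c", "import sys; print(sys.dont_write_bytecode)"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # -> True
```

Wrapping the test runner invocation this way is one piece of the "deterministic function" discipline the comment describes; hidden caches written by other tools still need their own handling.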

shoo | 4 years ago