Discussion:
Experimental C Build System
bart
2024-01-29 16:03:45 UTC
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.

This proposal comes under 'convenient' rather than 'automatic'. (I did
try an automatic scheme in the past, but that only worked for specially
written projects.)

Here, the method is straightforward: the necessary info is simply listed
in the designated lead module, within a set of #pragma lines.

For my go-to small project demo, which comprises the three source files
cipher.c, hmac.c, sha2.c, there are two ways to do it:

(1) Add the info to the top of the lead module cipher.c:

#pragma module "hmac.c"
#pragma module "sha2.c"
....

I wasn't intending to actually implement it, but it didn't take long,
and it seems to work:

c:\cx>mcc cipher
Compiling cipher.c to cipher.exe
Adding module hmac.c
Adding module sha2.c

(2) Create an extra lead module and add it to the project.

This allows the scheme to be superimposed on an existing codebase
without having to modify it. If I try that on the above cipher project
in a new module demo.c, it will contain:

#pragma module "cipher.c"
#pragma module "hmac.c"
#pragma module "sha2.c"

It works like this (in the real version those "Adding" lines will be
silent):

c:\cx>mcc demo
Compiling demo.c to demo.exe
Adding module cipher.c
Adding module hmac.c
Adding module sha2.c

Getting the original cipher.exe output needs an override option, but see
below.

Method (2) is attractive as it provides a means to easily set up
different configurations of an application, by mixing and matching modules.

Pragma Directives
-----------------

These are the ones I had in mind:

module "file.c"     As used above. Possibly, wildcards can work here

import "file.c"     Incorporate a separate project which has its own
                    set of pragma directives

link "file.dll"     Any binary libraries

header "file.h"     Specify a program-wide shared header

Possibly the 'import' one can be dispensed with; it is simple enough to
manually copy and paste the necessary info. However, that means it is
listed in more than one place, and the original can change.

The idea of 'header' is to specify big headers (windows.h, sdl2.h, etc.)
that are independent of anything else; these are then processed just
once by the compiler, rather than once for each module that includes
them. The usual '#include's are still needed.

(The intention is not to create a whole-program compiler, or to
introduce a module scheme, although this provides some of the benefits.
The C language is unchanged.)

Possibly, there can be a directive called 'name' to specify an
executable file name.

Working with Other Compilers
----------------------------

Clearly, my scheme will only work with a suitably modified compiler.
Without that, I considered doing something like this, adding this
block to my example from (2):

#pragma module "cipher.c"
#pragma module "hmac.c"
#pragma module "sha2.c"

#ifndef __MCC__
#include "runcc.c"

int main(void) {
runcc(__FILE__);
}
#endif

When run with a compiler that is not MCC, this builds a small program
(here still called demo.exe), which calls a function to read this file,
process the relevant #pragma lines, and use that info to invoke a
conventional compiler.

I haven't tested it, but it would mean a two-step process that looks
something like this (possibly, it can pick up the name of the compiler
that /is/ used, and invoke that on the actual program):

c:\cx\tcc demo.c
c:\cx\demo
... invoke tcc to build cipher.c hmac.c sha2.c ...

(Tcc of course also has the -run option to save that second line)

For this to work, the pragma stuff must be cleanly written: the runcc()
function will only do basic string processing; it is not a C compiler.


Using a Makefile
----------------

One use-case for this would be if /I/ supplied a multi-module C program,
or packaged someone else's.

But people are mad about makefiles so, sure, I can also supply a 2-line
makefile to do the above.
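For the record, that 2-line makefile for the cipher example might look something like this (the compiler name cc is a placeholder, and the recipe line must start with a tab):

```make
cipher.exe: cipher.c hmac.c sha2.c
	cc -o cipher.exe cipher.c hmac.c sha2.c
```
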

Dependencies and Incremental Compilation
----------------------------------------

This project is not about that, and is for cases where compiling all
sources in one go is viable, or where a one-off build time is not relevant.

That can mean when using a fast compiler, and/or when the scale of the
project allows.

The 'header' directive will also help in cases where the
application itself is small, but has dependencies on large, complex
headers. (I haven't quite figured out how it might work though.)
Lawrence D'Oliveiro
2024-01-30 00:57:07 UTC
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its usefulness in
today’s multilingual world.
Chris M. Thomasson
2024-01-30 01:38:58 UTC
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its usefulness in
today’s multilingual world.
Huh?
David Brown
2024-01-30 08:06:43 UTC
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Huh?
I assume he means it's common to use multiple programming languages,
rather than multiple human languages. (The latter may also be true, but
it's the former that is relevant.)

For my own use at least, he's right. His system is aimed at being
simpler than make for C-only projects with limited and straightforward
build requirements. That's fine for such projects, and if that suits
his needs or the needs of others, great. But it would not cover more
than a tiny proportion of my projects over the decades - at least not
without extra help (extra commands, bash/bat files, etc.)
Chris M. Thomasson
2024-01-30 23:23:24 UTC
Post by David Brown
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Huh?
I assume he means it's common to use multiple programming languages,
rather than multiple human languages.  (The later may also be true, but
it's the former that is relevant.)
For my own use at least, he's right.  His system is aimed at being
simpler than make for C-only projects with limited and straightforward
build requirements.
When you say his, you mean, Bart's system, right?
Post by David Brown
That's fine for such projects, and if that suits
his needs or the needs of others, great.  But it would not cover more
than a tiny proportion of my projects over the decades - at least not
without extra help (extra commands, bash/bat files, etc.)
David Brown
2024-01-31 07:36:36 UTC
Post by Chris M. Thomasson
Post by David Brown
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its usefulness in
today’s multilingual world.
Huh?
I assume he means it's common to use multiple programming languages,
rather than multiple human languages.  (The later may also be true,
but it's the former that is relevant.)
For my own use at least, he's right.  His system is aimed at being
simpler than make for C-only projects with limited and straightforward
build requirements.
When you say his, you mean, Bart's system, right?
Yes.
Post by Chris M. Thomasson
Post by David Brown
That's fine for such projects, and if that suits his needs or the
needs of others, great.  But it would not cover more than a tiny
proportion of my projects over the decades - at least not without
extra help (extra commands, bash/bat files, etc.)
Chris M. Thomasson
2024-02-01 03:12:08 UTC
Post by Chris M. Thomasson
Post by David Brown
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its usefulness in
today’s multilingual world.
Huh?
I assume he means it's common to use multiple programming languages,
rather than multiple human languages.  (The later may also be true,
but it's the former that is relevant.)
For my own use at least, he's right.  His system is aimed at being
simpler than make for C-only projects with limited and
straightforward build requirements.
When you say his, you mean, Bart's system, right?
Yes.
[...]

Ahhh, thanks David.
bart
2024-01-31 00:44:57 UTC
Post by David Brown
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Huh?
I assume he means it's common to use multiple programming languages,
rather than multiple human languages.  (The later may also be true, but
it's the former that is relevant.)
For my own use at least, he's right.  His system is aimed at being
simpler than make for C-only projects with limited and straightforward
build requirements.  That's fine for such projects, and if that suits
his needs or the needs of others, great.  But it would not cover more
than a tiny proportion of my projects over the decades - at least not
without extra help (extra commands, bash/bat files, etc.)
It would cover most open source C projects that I have tried to build.
All the following examples came down to a list of C files to be
submitted to a compiler:

Lua
Seed7* (a version from 5 years ago)
Tcc*
PicoC
LibJPEG*
'BBX' (Malcolm's resource compiler)
A68G (An older version; current one is Linux-only)

The ones marked * I believe required some process first to synthesise
some essential header, e.g. config.h, often only a few lines long. But
once that was out of the way, then yes, it was just N C files to plough through.

Tcc also had other things going on (once tcc.exe was built, it was used
to prepare some libraries).

LibJPEG had more than one executable, which shared a lot of common
modules. The makefile put those into one .a file, which was then
included in all programs. But since it was statically linked, it did not
save space.

Once I knew what was going on, I just put the common modules in the list
for each program. Or /I/ could choose to put those into a shared library.

It's a question of extracting the easy parts of a project. Once I know
that, I can work my way around anything else and devise my own solutions.

--------------------

In terms of my own real applications, they involved compiled modules;
interpreted modules (that needed compiling to bytecode); processing
source files to derive/update message files for internationalisation;
packaging the numerous files into tidy containers; uploading to
distribution disks, or via FTP; scripts to generate the new index.html
for downloads...

I understand all that part of it. The necessary scripting is utterly
trivial. The above was a process to go through for each release. It
wasn't time-critical, and there were no dependencies to deal with. It
wasn't makefile territory.

The build system described in my OP is that needed to build one binary
file in one language, which is 95% of what I had trouble with in that
list above, /because/ the supplied build process revolved around
makefiles and configure scripts.
bart
2024-01-30 01:45:43 UTC
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of
compiling assemblies of modules.

C doesn't.

The proposal would allow a project to be built using:

cc file.c

instead of cc file.c file2.c .... lib1.dll lib2.dll ...,

or instead of having to provide a makefile or an @ filelist.

That is a significant advance on what C compilers typically do.
Malcolm McLean
2024-01-30 04:46:10 UTC
Post by bart
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of
compiling assemblies of modules.
C doesn't.
   cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
That is significant advance on what C compilers typically do.
There's a desperate need for hierarchy.
A library like ChatGPT only needs to expose one function,
"answer_question". Maybe a few extras to give context. But of course that
one function calls masses and masses of subroutines, which should be
private to the module, but not just to the source file containing the
"answer_question" function.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
bart
2024-01-30 11:52:11 UTC
Post by Malcolm McLean
Post by bart
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of
compiling assemblies of modules.
C doesn't.
    cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
That is significant advance on what C compilers typically do.
There's a desperate need for hierarchy.
A library like ChatGTP only needs to expose one function,
"answer_question". Maybe a few extra to give context. But of course that
one function calls masses and masses of subroutines. Which should be
private to the module, but not to the source file for the
"answer_question" function.
I'm not sure what that has to do with my proposal (which is not to add a
module scheme as I said).

I've now added wildcards to my test implementation. If I go to your
resource compiler project (which I call 'BBX') and add one small C file
called bbx.c containing:

#pragma module "*.c"
#pragma module "freetype/*.c"
#pragma module "samplerate/*.c"

then I can build it like this:

c:\bbx\src>mcc bbx
Compiling bbx.c to bbx.exe

The file also provides the name of the executable:

c:\bbx\src>bbx
The Baby X resource compiler v1.1
by Malcolm Mclean
....

Without this feature, building wasn't exactly onerous; I used an @ file
called 'bbx' which contained:

*.c freetype/*.c samplerate/*.c

and built using:

c:\bbx\src>mcc @bbx -out:bbx
Compiling 44 files to bbx.exe

But this requires an extra, non-C file (effectively a script), and a
special invocation (the @). The EXE name can be put in there as well,
but the option for that depends on the compiler. (gcc can't use this
@ file as it contains wildcards.)
Malcolm McLean
2024-01-30 16:50:17 UTC
Post by bart
Post by Malcolm McLean
Post by bart
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means
of compiling assemblies of modules.
C doesn't.
    cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
That is significant advance on what C compilers typically do.
 >
There's a desperate need for hierarchy.
A library like ChatGTP only needs to expose one function,
"answer_question". Maybe a few extra to give context. But of course
that one function calls masses and masses of subroutines. Which should
be private to the module, but not to the source file for the
"answer_question" function.
I'm not sure what that has to do with my proposal (which is not to add a
module scheme as I said).
Oh you are not adding modules
Post by bart
I've now added wildcards to my test implementation. If I go to your
resource compiler project (which I call 'BBX') and add one small C file
    #pragma module "*.c"
    #pragma module "freetype/*.c"
    #pragma module "samplerate/*.c"
    c:\bbx\src>mcc bbx
    Compiling bbx.c to bbx.exe
    c:\bbx\src>bbx
    The Baby X resource compiler v1.1
    by Malcolm Mclean
    ....
    *.c freetype/*.c samplerate/*.c
   Compiling 44 files to bbx.exe
But this requires an extra, non-C file (effectively a script), and a
special invocation (the @).
So essentially we have a path-listing and description language.
Which ironically is what the resource compiler basically does. You put a
list of paths into an XML file, and it uses that to find the resources,
and merge them together on standard output (as text, of course :-) ).

You're doing the same, except that of course you have to compile and
link rather than decode and lightly pre-process.

But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
bart
2024-01-30 17:57:29 UTC
Post by Malcolm McLean
Post by bart
Post by Malcolm McLean
There's a desperate need for hierarchy.
A library like ChatGTP only needs to expose one function,
"answer_question". Maybe a few extra to give context. But of course
that one function calls masses and masses of subroutines. Which
should be private to the module, but not to the source file for the
"answer_question" function.
I'm not sure what that has to do with my proposal (which is not to add
a module scheme as I said).
Oh you are not adding modules
In my other language with modules, it specifically does not have a
hierarchy of modules. It causes all sorts of problems, since it's hard
to get away from cycles.

And sometimes you just want to split one module M into modules A and B;
there is no dominant one.

But it also means it doesn't do anything clever to determine the set of
modules comprising a project, starting from one module.

Some languages traverse a tree of import statements. In mine, I don't
have import statements littered across the program at all. There is just
a shopping list of modules at the start of the lead module.

That is the model I used for this C experiment.
Post by Malcolm McLean
Post by bart
I've now added wildcards to my test implementation. If I go to your
resource compiler project (which I call 'BBX') and add one small C
     #pragma module "*.c"
     #pragma module "freetype/*.c"
     #pragma module "samplerate/*.c"
     c:\bbx\src>mcc bbx
     Compiling bbx.c to bbx.exe
So essentially we have path listing and description language.
Which ironically is what the resource compiler basically does. You put a
list of paths into an XML file, and it uses that to find the resources,
and merge them together on standard output (as text, of course :-) ).
You're doing the same, except that of course you have to compile and
link rather than decode and lightly pre-process.
Yes, the requirements are very simple: it's a list of files! The same
list is usually encoded cryptically inside a makefile, or submitted to
CMake, or put inside an @ file, or given to the compiler on one long
command line, or in multiple invocations.

Here it's tidily contained within the C source code.
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
That occurred to me too. I gave an outline to invoke a special C module
to scan those #pragma entries in cases where my compiler was not used.

Such an approach could also be used to unpack a set of source files
concatenated into one big source file. This is tidier than having a
sprawling set of files perhaps split across directories.

It means you can just supply one text file.

But there are other ways that people do the same job of turning
multi-module C into a single file.
Richard Harnden
2024-01-30 19:22:00 UTC
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
In other words: a Makefile
vallor
2024-01-31 16:41:21 UTC
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
Post by Richard Harnden
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
In other words: a Makefile
Agreed; it's a solution looking for a problem.

$ make -j # how does Bart's new build manager handle this case?

("-j" engages parallel compilation.)

ObC:
$ cat try.c
#include <stdlib.h>
int main(void) {
return(system("make -j 16"));
}
_ _ _ _ _ _ _

$ cat Makefile
CFLAGS=-g -O2 -std=c90 -pedantic
_ _ _ _ _ _ _

$ make try
cc -g -O2 -std=c90 -pedantic try.c -o try

$ ./try
make: 'try' is up to date.
--
-v
vallor
2024-01-31 19:01:01 UTC
Post by vallor
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
Post by Richard Harnden
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
In other words: a Makefile
Agreed; it's a solution looking for a problem.
$ make -j # how does Bart's new build manager handle this case?
("-j" engages parallel compilation.)
$ cat try.c
#include <stdlib.h>
int main(void) {
return(system("make -j 16"));
}
_ _ _ _ _ _ _
$ cat Makefile
CFLAGS=-g -O2 -std=c90 -pedantic
_ _ _ _ _ _ _
$ make try
cc -g -O2 -std=c90 -pedantic try.c -o try
$ ./try
make: 'try' is up to date.
I also had "try:" in my Makefile.

_ _ _ _ _ _ _
CFLAGS=-g -O2 -std=c90 -pedantic

try:
_ _ _ _ _ _ _

But I changed the source to invoke the target explicitly:

$ cat try.c
#include <stdlib.h>
int main(void) {
return(system("make -j 16 try"));
}

$ ./try
cc -g -O2 -std=c90 -pedantic try.c -o try

$ ./try
make: 'try' is up to date.

(Beats trying to learn COBOL to keep up with
comp.lang.c... ;)
--
-v
bart
2024-01-31 20:25:07 UTC
Post by vallor
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
Post by Richard Harnden
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
In other words: a Makefile
Agreed; it's a solution looking for a problem.
Why do you think languages come with modules? That allows them to
discover their own modules, rather than rely on external apps where the
details are buried under appalling syntax and mixed up with a hundred
other matters.
Post by vallor
$ make -j # how does Bart's new build manager handle this case?
("-j" engages parallel compilation.)
$ cat try.c
#include <stdlib.h>
int main(void) {
return(system("make -j 16"));
}
_ _ _ _ _ _ _
$ cat Makefile
CFLAGS=-g -O2 -std=c90 -pedantic
_ _ _ _ _ _ _
$ make try
cc -g -O2 -std=c90 -pedantic try.c -o try
$ ./try
make: 'try' is up to date.
This on the other hand looks EXACTLY like a solution looking for a problem.


BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.

And as written, it only works for 'cc' which comes with 'gcc'. If I use
CC to set another compiler, then the -o option is wrong for tcc. The
other options are not recognised with two other compilers.

Look at the follow-up to my OP that I will shortly post.
David Brown
2024-02-01 08:39:15 UTC
Post by bart
Post by vallor
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
Post by Richard Harnden
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
In other words: a Makefile
Agreed; it's a solution looking for a problem.
Why do you think languages come with modules? That allows them to
discover their own modules, rather than rely on external apps where the
details are buried under appalling syntax and mixed up with a hundred
other matters.
No, that is not at all the purpose of modules in programming. Note that
there is no specific meaning of "module", and different languages use
different terms for similar concepts. There are many features that a
language's "module" system might have - some have all, some have few:

1. It lets you split the program into separate parts - generally
separate files. This is essential for scalability for large programs.

2. You can compile modules independently to allow partial builds.

3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.

4. Modules can "import" other modules, gaining access to those modules'
exported symbols.

5. Modules provide encapsulation of data, code and namespaces.

6. Modules can be used in a hierarchical system, building big modules
from smaller ones to support larger libraries with many files.

7. Modules provide a higher level concept that can be used by language
tools to see how the whole program fits together or interact with
package managers and librarian tools.


C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation. It
provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.

You seem to be thinking purely about item 7 above. This is, I think,
common in interpreted languages (where modules have to be found at
run-time, where the user is there but the developer is not). Compiled
languages don't usually have such a thing, because developers (as
distinct from users) have build tools available that do a better job.
bart
2024-02-01 11:31:14 UTC
Post by David Brown
Post by bart
Post by vallor
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
Post by Richard Harnden
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
In other words: a Makefile
Agreed; it's a solution looking for a problem.
Why do you think languages come with modules? That allows them to
discover their own modules, rather than rely on external apps where
the details are buried under appalling syntax and mixed up with a
hundred other matters.
No, that is not at all the purpose of modules in programming.  Note that
there is no specific meaning of "module", and different languages use
different for similar concepts.  There are many features that a
1. It lets you split the program into separate parts - generally
separate files.  This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those modules'
exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big modules
from smaller ones to support larger libraries with many files.
7. Modules provide a higher level concept that can be used by language
tools to see how the whole program fits together or interact with
package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.  It
provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.
You seem to be thinking purely about item 7 above.  This is, I think,
common in interpreted languages (where modules have to be found at
run-time, where the user is there but the developer is not).
I've been implementing languages with language-supported modules for
about 12 years.

They generally provide 1, 3, 4, 5, and 7 from your list, and partial
support of 6.

They don't provide 2 (compiling individual modules) because the aim is a
very fast, whole-program compiler.
While for 6, there is only a hierarchy between groups of modules, each
forming an independent sub-program or library. I tried a strict full
per-module hierarchy early on, mixed up with independent compilation; it
worked poorly.

The two levels allow you to assemble one binary out of groups of modules
that each represent an independent component or library.
Compiled
languages don't usually have such a thing, because developers (as
distinct from users) have build tools available that do a better job.
Given a module scheme, the tool needed to build a whole program should
not need to be told about the names and location of every constituent
module; it should be able to determine that from what's already in the
source code, given only a start point.

Even with independent compilation, you might be able to use that info to
determine dependencies, but you will need that module hierarchy if you
want to compile individual modules.

My view is that that tool only needs to be the compiler (a program that
does the 'full stack' from source files to executable binary) working
purely from the source code.

Yours is to have compilers, assemblers, linkers and make programs,
working with auxiliary data in makefiles, which themselves have to be
generated by extra tools or special options, or built by hand.

I see that as old-fashioned and error-prone. Also complex and limited
(eg. they will not support my compiler.)

The experiment in my OP is intended to bring part of my module scheme to C.

However, that will of course be poorly received. Why? Because when a
language doesn't provide a certain feature (eg. module management), then
people are free to do all sorts of wild and whacky things to achieve
some result.

Approaches that don't fit into the disciplined requirements of a
language-stipulated module scheme.

A good example is the header-based module scheme of my BCC compiler;
this required modules to be implemented as tidy .h/.c pairs of files. Of
course, real C code is totally chaotic in its use of headers.

In other words, you can't retro-fit a real module scheme to C, not one
that will work with existing code.

But for all my projects and all the ones /I/ want to build, they do come
down to just knowing what source files need to be submitted to the
compiler. It really can be that simple. That CAN be trivially
retro-fitted to existing projects.
David Brown
2024-02-01 15:11:50 UTC
Post by bart
Post by bart
Post by vallor
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
Post by Richard Harnden
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
In other words: a Makefile
Agreed; it's a solution looking for a problem.
Why do you think languages come with modules? That allows them to
discover their own modules, rather than rely on external apps where
the details are buried under appalling syntax and mixed up with a
hundred other matters.
No, that is not at all the purpose of modules in programming.  Note
that there is no specific meaning of "module", and different languages
use different terms for similar concepts.  There are many features that a
1. It lets you split the program into separate parts - generally
separate files.  This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those
modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big modules
from smaller ones to support larger libraries with many files.
7. Modules provide a higher level concept that can be used by language
tools to see how the whole program fits together or interact with
package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
It provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.
You seem to be thinking purely about item 7 above.  This is, I think,
common in interpreted languages (where modules have to be found at
run-time, where the user is there but the developer is not).
I've been implementing languages with language-supported modules for
about 12 years.
They generally provide 1, 2, 4, 5, and 7 from your list, and partial
support of 6.
Sure. Programming languages need that if they are to scale at all.
Post by bart
They don't provide 2 (compiling individual modules) because the aim is a
very fast, whole-program compiler.
Okay.


But what you are talking about to add to C is item 7, nothing more.
That is not adding "modules" to C. Your suggestion might be useful to
some people for some projects, but that doesn't make it "modules" in any
real sense.
Post by bart
While for 6, there is only a hierarchy between groups of modules, each
forming an independent sub-program or library. I tried a strict full
per-module hierarchy early on, mixed up with independent compilation; it
worked poorly.
The two levels allow you to assemble one binary out of groups of modules
that each represent an independent component or library.
Compiled
languages don't usually have such a thing, because developers (as
distinct from users) have build tools available that do a better job.
Given a module scheme, the tool needed to build a whole program should
not need to be told about the names and location of every constituent
module; it should be able to determine that from what's already in the
source code, given only a start point.
Why?

You can't just take some idea that you like, and that is suitable for
the projects you use, and assume it applies to everyone else.

I have no problem telling my build system, or compilers, where the files
are. In fact, I'd have a lot of problems if I couldn't do that. It is
not normal development practice to have the source files in the same
directory that you use for building the object code and binaries.
Post by bart
Even with independent compilation, you might be able to use that info to
determine dependencies, but you will need that module hierarchy if you
want to compile individual modules.
I already have tools for determining dependencies. What can your
methods do that mine can't?

(And don't bother saying that you can do it without extra tools -
everyone who wants "make" and "gcc" has them on hand. And those who
want an IDE that figures out dependencies for them have a dozen free
options there too. These are all standard tools available to everyone.)
Post by bart
My view is that that tool only needs to be the compiler (a program that
does the 'full stack' from source files to executable binary) working
purely from the source code.
Yours is to have compilers, assemblers, linkers and make programs,
working with auxiliary data in makefiles, which themselves have to be
generated by extra tools or special options, or built by hand.
You want a limited little built-in tool. I want a toolbox that I can
use in all sorts of ways - for things you have never imagined. I can
see how your personal tools can be useful for you, as a single developer
on your own - if you want something else you can add it to those tools.
For others, they are useless.

Perhaps I would find your tools worked for a "Hello, world" project.
Maybe they were still okay as it got slightly bigger. Then I'd have
something that they could not handle, and I'd reach for make. What
would be the point of using "make" to automate - for example -
post-processing of a binary to add a CRC check, but using your tools to
handle the build? It's much easier just to use "make" for the whole thing.

You are offering me a fish. I am offering to teach you to fish,
including where to go to catch different kinds of fish. This is really
a no-brainer choice.
Post by bart
In other words, you can't retro-fit a real module scheme to C, not one
that will work with existing code.
We know that. Otherwise it would have happened, long ago.
Malcolm McLean
2024-02-01 17:33:46 UTC
Reply
Permalink
Post by bart
Post by bart
Post by vallor
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
Post by Richard Harnden
Post by Malcolm McLean
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
In other words: a Makefile
Agreed; it's a solution looking for a problem.
Why do you think languages come with modules? That allows them to
discover their own modules, rather than rely on external apps where
the details are buried under appalling syntax and mixed up with a
hundred other matters.
No, that is not at all the purpose of modules in programming.  Note
that there is no specific meaning of "module", and different
languages use different terms for similar concepts.  There are many
features that a language's "module" system might have - some have
1. It lets you split the program into separate parts - generally
separate files.  This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those
modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big modules
from smaller ones to support larger libraries with many files.
7. Modules provide a higher level concept that can be used by
language tools to see how the whole program fits together or interact
with package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
It provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.
You seem to be thinking purely about item 7 above.  This is, I think,
common in interpreted languages (where modules have to be found at
run-time, where the user is there but the developer is not).
I've been implementing languages with language-supported modules for
about 12 years.
They generally provide 1, 2, 4, 5, and 7 from your list, and partial
support of 6.
Sure.  Programming languages need that if they are to scale at all.
Post by bart
They don't provide 2 (compiling individual modules) because the aim is
a very fast, whole-program compiler.
Okay.
But what you are talking about to add to C is item 7, nothing more. That
is not adding "modules" to C.  Your suggestion might be useful to some
people for some projects, but that doesn't make it "modules" in any real
sense.
Post by bart
While for 6, there is only a hierarchy between groups of modules, each
forming an independent sub-program or library. I tried a strict full
per-module hierarchy early on, mixed up with independent compilation;
it worked poorly.
The two levels allow you to assemble one binary out of groups of
modules that each represent an independent component or library.
 > Compiled
 > languages don't usually have such a thing, because developers (as
 > distinct from users) have build tools available that do a better job.
Given a module scheme, the tool needed to build a whole program should
not need to be told about the names and location of every constituent
module; it should be able to determine that from what's already in the
source code, given only a start point.
Why?
You can't just take some idea that you like, and that is suitable for
the projects you use, and assume it applies to everyone else.
I have no problem telling my build system, or compilers, where the files
are.  In fact, I'd have a lot of problems if I couldn't do that.  It is
not normal development practice to have the source files in the same
directory that you use for building the object code and binaries.
Our system is that we've got two types of source generated by us, the
libraries which are used by all the programs, and the code specific to
each program. The library source code is placed on a central server and
then downloaded by conan (a package manager), which keeps it in a private
directory on the local machine not intended for viewing. The source
specific to the program is placed in a git project and synchronised with
git's remote repository facilities. Then IDE project files are built
with CMake. These with various other derived bits and bobs are placed in
a build folder, which is always under the git repository, but placed in
the ignore file and so not under git source control. The IDE is then
invoked on the project file in the build directory, and the executables
also go into the build directory. They then need to be moved to a
different location to be run.
CMake is set up so that it recursively crawls the source directories and
places every single source file into the IDE project file. This isn't
really recommended but it means you don't have to maintain CMakeLists files.
So it's an out of tree build. But we can't just place source in some
random location on the local machine and tell the system to pull it in.
Technically you could modify the CMake script to do that. But it would
break the whole system.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
bart
2024-02-01 18:34:08 UTC
Reply
Permalink
Post by bart
Post by David Brown
1. It lets you split the program into separate parts - generally
separate files.  This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those
modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big modules
from smaller ones to support larger libraries with many files.
7. Modules provide a higher level concept that can be used by
language tools to see how the whole program fits together or interact
with package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
It provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.
You seem to be thinking purely about item 7 above.  This is, I think,
common in interpreted languages (where modules have to be found at
run-time, where the user is there but the developer is not).
I've been implementing languages with language-supported modules for
about 12 years.
They generally provide 1, 2, 4, 5, and 7 from your list, and partial
support of 6.
Sure.  Programming languages need that if they are to scale at all.
Post by bart
They don't provide 2 (compiling individual modules) because the aim is
a very fast, whole-program compiler.
Okay.
But what you are talking about to add to C is item 7, nothing more. That
is not adding "modules" to C.  Your suggestion might be useful to some
people for some projects, but that doesn't make it "modules" in any real
sense.
Item 7 is my biggest stumbling block in building open source C projects.

While the developer (say, you) knows the necessary info and can somehow
import it into the build system, my job is trying to get it out.

I can't use the intended build system because for one reason or another
it doesn't work, or requires complex dependencies (MSYS, CMake, MSTOOLS,
./configure), or I want to run mcc on it.

That info could trivially be added to the C source code. Nobody actually
needs to use my #pragma scheme; it could simply be a block comment on
one of the modules.

I'm sure that, with all your complicated tools, they could dump some
text that looks like:

// List of source files to build the binary cipher.exe:
// cipher.c
// hmac.c
// sha2.c

and prepend it to one of the files. Even a README will do.

That wouldn't hurt would it?
Post by bart
Given a module scheme, the tool needed to build a whole program should
not need to be told about the names and location of every constituent
module; it should be able to determine that from what's already in the
source code, given only a start point.
Why?
You can't just take some idea that you like, and that is suitable for
the projects you use, and assume it applies to everyone else.
I have no problem telling my build system, or compilers, where the files
are.  In fact, I'd have a lot of problems if I couldn't do that.  It is
not normal development practice to have the source files in the same
directory that you use for building the object code and binaries.
Post by bart
Even with independent compilation, you might be able to use that info
to determine dependencies, but you will need that module hierarchy if
you want to compile individual modules.
I already have tools for determining dependencies.  What can your
methods do that mine can't?
(And don't bother saying that you can do it without extra tools -
everyone who wants "make" and "gcc" has them on hand.  And those who
want an IDE that figures out dependencies for them have a dozen free
options there too.  These are all standard tools available to everyone.)
So, if C were to acquire modules, so that a C compiler could determine
all that for itself (maybe even work out for itself which modules need
recompiling), would you just ignore that feature and use the same
auxiliary methods you have always done?

You don't see that the language taking over task (1) of the things that
makefiles do, and possibly (2) (of the list I posted; repeated below),
can streamline makefiles to make them shorter, simpler, easier to write
and to read, and with fewer opportunities to get stuff wrong?

That was a rhetorical question. Obviously not.
Perhaps I would find your tools worked for a "Hello, world" project.
Maybe they were still okay as it got slightly bigger.  Then I'd have
something that they could not handle, and I'd reach for make.  What
would be the point of using "make" to automate - for example -
post-processing of a binary to add a CRC check, but using your tools to
handle the build?  It's much easier just to use "make" for the whole thing.
Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.

Maybe you think makefiles should individually list all the 1000s of
functions of a project too?
You are offering me a fish.  I am offering to teach you to fish,
including where to go to catch different kinds of fish.  This is really
a no-brainer choice.
That analogy makes no sense.

Let me try and explain what I do: I write whole-program compilers. That
means that, each time you do a new build, it will reprocess each file
from source. They use the language's module scheme to know which files
to process.

I tend to build C programs by recompiling all modules too. So I want to
introduce the same convenience I have elsewhere.

It works for me, and I'm sure could work for others if they didn't have
makefiles forced down their throats and hardwired into their brains.

----------------------------
(Repost)

I've already covered this in many posts on the subject. But 'make' deals
with three kinds of requirements:

(1) Specifying what the modules are to be compiled and combined into one
binary file

(2) Specifying dependencies between all files to allow rebuilding of that
one file with minimal recompilation

(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries, specifying
dependencies between binaries, installation etc

My proposal tackles only (1), which is something that many languages now
have the means to deal with themselves. I already stated that (2) is not
covered.

But you may still need makefiles to deal with (3).

If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C compiler.
Michael S
2024-02-01 20:23:28 UTC
Reply
Permalink
On Thu, 1 Feb 2024 18:34:08 +0000
Post by bart
Post by bart
Post by David Brown
1. It lets you split the program into separate parts - generally
separate files.  This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols
and facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those
modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big
modules from smaller ones to support larger libraries with many
files.
7. Modules provide a higher level concept that can be used by
language tools to see how the whole program fits together or
interact with package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h"
organisation. It provides a limited form of 5 (everything that is
not exported is "static"), but scaling to larger systems is
dependent on identifier prefixes.
You seem to be thinking purely about item 7 above.  This is, I
think, common in interpreted languages (where modules have to be
found at run-time, where the user is there but the developer is
not).
I've been implementing languages with language-supported modules
for about 12 years.
They generally provide 1, 2, 4, 5, and 7 from your list, and
partial support of 6.
Sure.  Programming languages need that if they are to scale at all.
Post by bart
They don't provide 2 (compiling individual modules) because the
aim is a very fast, whole-program compiler.
Okay.
But what you are talking about to add to C is item 7, nothing more.
That is not adding "modules" to C.  Your suggestion might be useful
to some people for some projects, but that doesn't make it
"modules" in any real sense.
Item 7 is my biggest stumbling block in building open source C projects.
While the developer (say, you) knows the necessary info and can
somehow import it into the build system, my job is trying to get it out.
I can't use the intended build system because for one reason or
another it doesn't work, or requires complex dependencies (MSYS,
CMake, MSTOOLS, ./configure), or I want to run mcc on it.
That info could trivially be added to the C source code. Nobody
actually needs to use my #pragma scheme; it could simply be a block
comment on one of the modules.
I'm sure that, with all your complicated tools, they could dump some
text that looks like:
// cipher.c
// hmac.c
// sha2.c
and prepend it to one of the files. Even a README will do.
That wouldn't hurt would it?
Post by bart
Given a module scheme, the tool needed to build a whole program
should not need to be told about the names and location of every
constituent module; it should be able to determine that from
what's already in the source code, given only a start point.
Why?
You can't just take some idea that you like, and that is suitable
for the projects you use, and assume it applies to everyone else.
I have no problem telling my build system, or compilers, where the
files are.  In fact, I'd have a lot of problems if I couldn't do
that.  It is not normal development practice to have the source
files in the same directory that you use for building the object
code and binaries.
Post by bart
Even with independent compilation, you might be able to use that
info to determine dependencies, but you will need that module
hierarchy if you want to compile individual modules.
I already have tools for determining dependencies.  What can your
methods do that mine can't?
(And don't bother saying that you can do it without extra tools -
everyone who wants "make" and "gcc" has them on hand.  And those
who want an IDE that figures out dependencies for them have a dozen
free options there too.  These are all standard tools available to
everyone.)
So, if C were to acquire modules, so that a C compiler could
determine all that for itself (maybe even work out for itself
which modules need recompiling), would you just ignore that feature
and use the same auxiliary methods you have always done?
You don't see that the language taking over task (1) of the things
that makefiles do, and possibly (2) (of the list I posted; repeated
below), can streamline makefiles to make them shorter, simpler,
easier to write and to read, and with fewer opportunities to get
stuff wrong?
That was a rhetorical question. Obviously not.
Perhaps I would find your tools worked for a "Hello, world"
project. Maybe they were still okay as it got slightly bigger.
Then I'd have something that they could not handle, and I'd reach
for make.  What would be the point of using "make" to automate -
for example - post-processing of a binary to add a CRC check, but
using your tools to handle the build?  It's much easier just to use
"make" for the whole thing.
Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.
Maybe you think makefiles should individually list all the 1000s of
functions of a project too?
You are offering me a fish.  I am offering to teach you to fish,
including where to go to catch different kinds of fish.  This is
really a no-brainer choice.
That analogy makes no sense.
Let me try and explain what I do: I write whole-program compilers.
That means that, each time you do a new build, it will reprocess each
file from source. They use the language's module scheme to know which
files to process.
I tend to build C programs by recompiling all modules too. So I want
to introduce the same convenience I have elsewhere.
It works for me, and I'm sure could work for others if they didn't
have makefiles forced down their throats and hardwired into their
brains.
----------------------------
(Repost)
I've already covered this in many posts on the subject. But 'make' deals
with three kinds of requirements:
(1) Specifying what the modules are to be compiled and combined into
one binary file
(2) Specifying dependencies between all files to allow rebuilding of
that one file with minimal recompilation
(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation etc
My proposal tackles only (1), which is something that many languages
now have the means to deal with themselves. I already stated that (2)
is not covered.
But you may still need makefiles to deal with (3).
If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C
compiler.
Your proposal and David Brown's needs are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an option
for export of dependencies in a make-compatible format, i.e. something
very similar to the -MD option of gcc.

Then David could write in his makefile:

out/foo.elf : main_foo.c
mcc -MD $< -o $@

-include out/foo.d

And then to proceed with automation of his pre- and post-processing needs.
Scott Lurndal
2024-02-01 20:55:53 UTC
Reply
Permalink
Post by Michael S
On Thu, 1 Feb 2024 18:34:08 +0000
Post by bart
But you may still need makefiles to deal with (3).

If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C
compiler.

Your proposal and David Brown's needs are not necessarily
contradictory.
Although David (and I) aren't particularly interested in
changing something that already works quite well.
Post by Michael S
All you need to do to satisfy him is to add to your compiler an option
for export of dependencies in make-compatible format, i.e. something
very similar to -MD option of gcc.
I suspect he may be much more difficult to satisfy on this topic.

Nobody is going to switch production software to a one-off
unsupported compiler.
Chris M. Thomasson
2024-02-01 21:10:14 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Michael S
On Thu, 1 Feb 2024 18:34:08 +0000
Post by bart
But you may still need makefiles to deal with (3).

If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C compiler.

Your proposal and David Brown's needs are not necessarily
contradictory.
Although David (and I) aren't particularly interested in
changing something that already works quite well.
Post by Michael S
All you need to do to satisfy him is to add to your compiler an option
for export of dependencies in make-compatible format, i.e. something
very similar to -MD option of gcc.
I suspect he may be much more difficult to satisfy on this topic.
Nobody is going to switch production software to a one-off
unsupported compiler.
No shit. Even then, he would have to test drive it, make sure it passes
all unit tests, etc... How fun... ;^)
David Brown
2024-02-01 21:38:13 UTC
Reply
Permalink
Post by Michael S
On Thu, 1 Feb 2024 18:34:08 +0000
Post by bart
I've already covered this in many posts on the subject. But 'make' deals
with three kinds of requirements:
(1) Specifying what the modules are to be compiled and combined into
one binary file
(2) Specifying dependencies between all files to allow rebuilding of
that one file with minimal recompilation
(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation etc
My proposal tackles only (1), which is something that many languages
now have the means to deal with themselves. I already stated that (2)
is not covered.
But you may still need makefiles to deal with (3).
If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C
compiler.
Your proposal and David Brown's needs are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an option
for export of dependencies in make-compatible format, i.e. something
very similar to -MD option of gcc.
out/foo.elf : main_foo.c
-include out/foo.d
And then to proceed with automation of his pre- and post-processing needs.
But then I'd still be using "make", and Bart would not be happy.

And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever). So
Bart's new system would disappear entirely.
Michael S
2024-02-01 22:55:38 UTC
Reply
Permalink
On Thu, 1 Feb 2024 22:38:13 +0100
Post by David Brown
Post by Michael S
On Thu, 1 Feb 2024 18:34:08 +0000
Post by bart
I've already covered this in many posts on the subject. But 'make' deals
with three kinds of requirements:
(1) Specifying what the modules are to be compiled and combined
into one binary file
(2) Specifying dependencies between all files to allow rebuilding of
that one file with minimal recompilation
(3) Everything else needed in a complex project: running processes
to generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation etc
My proposal tackles only (1), which is something that many
languages now have the means to deal with themselves. I already
stated that (2) is not covered.
But you may still need makefiles to deal with (3).
If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C compiler.
Your proposal and David Brown's needs are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for export of dependencies in make-compatible format, i.e.
something very similar to -MD option of gcc.
out/foo.elf : main_foo.c
-include out/foo.d
And then to proceed with automation of his pre- and post-processing needs.
But then I'd still be using "make", and Bart would not be happy.
And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever).
So Bart's new system would disappear entirely.
Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.
Lawrence D'Oliveiro
2024-02-01 23:31:36 UTC
Reply
Permalink
Yes, I know, you copy&paste arcane macros from project to project, but you
had to write them n years ago and that surely was not easy.
And maybe you discover bugs in them in certain situations, and have to
track down all the places you copied/pasted them and fix them.

My code-reuse OCD reflex is twitching at this point.
Scott Lurndal
2024-02-02 02:08:14 UTC
Reply
Permalink
Post by Michael S
On Thu, 1 Feb 2024 22:38:13 +0100
Post by David Brown
Post by Michael S
And then to proceed with automation of his pre- and post-processing needs.
But then I'd still be using "make", and Bart would not be happy.
And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever).
So Bart's new system would disappear entirely.
Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.
"Not easy for you" doesn't automatically translate to "not easy for
everyone else".

Difficult is the configuration file for sendmail processed by m4.

Make is easy.
David Brown
2024-02-02 08:02:15 UTC
Reply
Permalink
Post by Michael S
On Thu, 1 Feb 2024 22:38:13 +0100
Post by David Brown
Post by Michael S
On Thu, 1 Feb 2024 18:34:08 +0000
You proposal and needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for export of dependencies in make-compatible format, i.e.
something very similar to -MD option of gcc.
out/foo.elf : main_foo.c
-include out/foo.d
And then to proceed with automation of his pre- and post-processing needs.
But then I'd still be using "make", and Bart would not be happy.
And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever).
So Bart's new system would disappear entirely.
Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.
Google "makefile automatic dependencies", then adapt to suit your own
needs. Re-use the same makefile time and again.
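The automatic-dependency idiom being alluded to commonly takes a shape like the sketch below. It is an assumption-laden illustration, not anyone's actual makefile: the file names come from the cipher demo earlier in the thread, the build/ layout is a placeholder, and recipe lines must be indented with tabs.

```make
# Sketch of the usual gcc automatic-dependency pattern.  -MMD writes a
# .d fragment per object listing the headers it includes; -MP adds
# phony targets so deleted headers don't break the build; -include
# pulls the fragments back in on subsequent runs.
SRCS := cipher.c hmac.c sha2.c
OBJS := $(SRCS:%.c=build/%.o)
DEPS := $(OBJS:%.o=%.d)

cipher: $(OBJS)
	$(CC) $^ -o $@

build/%.o: %.c
	@mkdir -p build
	$(CC) -MMD -MP -c $< -o $@

-include $(DEPS)
```

After the first build, touching a header recompiles only the objects whose .d fragments list it. Note that the SRCS list itself is still maintained by hand, which is precisely the part Bart's #pragma scheme proposes to move into the source code.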

Yes, some of the functions I have in my makefiles are a bit hairy, and
some of the command line options for gcc are a bit complicated. They
are done now.

If there had been an easier way than this, which still let me do what I
need (Bart's system does not), which is popular enough that you can
easily google for examples, blogs, and tutorials, then I'd have been
happy to use that at the time. I won't change to something else unless
it gives me significant additional benefits.

People smarter and more experienced than Bart have been trying to invent
better replacements for "make" for many decades. None have succeeded.
Some build systems are better in some ways, but nothing has come close
to covering the wide range of features and uses of make, or gaining hold
outside a particular niche. Everyone who has ever made serious use of
"make" knows it has many flaws, unnecessarily complications, limitations
and inefficiencies. Despite that, it is the best we have.

With Bart's limited knowledge and experience, and deeply ingrained
prejudices and misunderstandings, the best we can hope for is something
that works well enough for some simple cases of C programs. More
realistically, it will work for Bart's use alone.

And that, of course, is absolutely fine. No one is paying Bart to write
a generic build system, or something of use to anyone else. He is free
to write exactly what he wants, in the way he wants, and if ends up with
a tool that he finds useful himself, that is great. If he ends up with
something that at least some other people find useful, that is even
better, and I wish him luck with his work.

But don't hold your breath waiting for something that will replace make,
or attract users of any other build system.
Michael S
2024-02-02 13:28:49 UTC
Reply
Permalink
On Fri, 2 Feb 2024 09:02:15 +0100
Post by David Brown
But don't hold your breath waiting for something that will replace
make, or attract users of any other build system.
It seems you already forgot the context of my post that started this
short sub-thread.

BTW, I would imagine that Stu Feldman, if he is still in good health,
would find talking with Bart more entertaining than talking with you.
I think you English speakers call it birds of a feather.
bart
2024-02-02 13:47:25 UTC
Reply
Permalink
Post by David Brown
Post by Michael S
On Thu, 1 Feb 2024 22:38:13 +0100
Post by David Brown
Post by Michael S
On Thu, 1 Feb 2024 18:34:08 +0000
Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for export of dependencies in make-compatible format, i.e.
something very similar to -MD option of gcc.
out/foo.elf : main_foo.c
-include out/foo.d
And then to proceed with automation of his pre- and post-processing needs.
But then I'd still be using "make", and Bart would not be happy.
And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever).
So Bart's new system would disappear entirely.
Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.
Google "makefile automatic dependencies", then adapt to suit your own
needs.  Re-use the same makefile time and again.
Yes, some of the functions I have in my makefiles are a bit hairy, and
some of the command line options for gcc are a bit complicated.  They
are done now.
If there had been an easier way than this, which still let me do what I
need (Bart's system does not), which is popular enough that you can
easily google for examples, blogs, and tutorials, then I'd have been
happy to use that at the time.  I won't change to something else unless
it gives me significant additional benefits.
People smarter and more experienced than Bart have been trying to invent
better replacements for "make" for many decades.  None have succeeded.
Some build systems are better in some ways, but nothing has come close
to covering the wide range of features and uses of make, or gaining hold
outside a particular niche.  Everyone who has ever made serious use of
"make" knows it has many flaws, unnecessarily complications, limitations
and inefficiencies.  Despite that, it is the best we have.
With Bart's limited knowledge and experience,
That's true: only 47 years in computing, and 42 years of evolving,
implementing and running my systems language.

What can I possibly know about compiling sources files of a lower-level
language into binaries?

How many assemblers, compilers, linkers, and interpreters have /you/
written?
Post by David Brown
and deeply ingrained
prejudices and misunderstandings, the best we can hope for is something
that works well enough for some simple cases of C programs.
With the proposal outlined in my OP, any of MY C programs, if I was to
write or port multi-module projects in that language, could be trivially
built by giving only the name of the compiler, and the name of one module.
Post by David Brown
More realistically, it will work for Bart's use alone.
It certainly won't work for your stuff, or SL's, or JP's, or TR's, as you
all seem to delight in wheeling out the most complex scenarios you can find.

That is another aspect you might do well to learn how to do: KISS. (Yes
I can be a patronising fuck too.)
Post by David Brown
And that, of course, is absolutely fine.  No one is paying Bart to write
a generic build system, or something of use to anyone else.  He is free
to write exactly what he wants, in the way he wants, and if ends up with
a tool that he finds useful himself, that is great.  If he ends up with
something that at least some other people find useful, that is even
better, and I wish him luck with his work.
But don't hold your breath waiting for something that will replace make,
or attract users of any other build system.
Jesus. And you seem determined to ignore everything I write, or have
a short memory.

I'm not suggesting replacing make, only to reduce its involvement.

Twice I posted a list of 3 things that make takes care of; I'm looking
at replacing just 1 of those things, the one which for me is most critical.
David Brown
2024-02-01 21:34:36 UTC
Reply
Permalink
Post by bart
Post by bart
Post by David Brown
1. It lets you split the program into separate parts - generally
separate files.  This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those
modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big
modules from smaller ones to support larger libraries with many files.
7. Modules provide a higher level concept that can be used by
language tools to see how the whole program fits together or
interact with package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
It provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.
You seem to be thinking purely about item 7 above.  This is, I
think, common in interpreted languages (where modules have to be
found at run-time, where the user is there but the developer is not).
I've been implementing languages with language-supported modules for
about 12 years.
They generally provide 1, 2, 4, 5, and 7 from your list, and partial
support of 6.
Sure.  Programming languages need that if they are to scale at all.
Post by bart
They don't provide 2 (compiling individual modules) because the aim
is a very fast, whole-program compiler.
Okay.
But what you are talking about to add to C is item 7, nothing more.
That is not adding "modules" to C.  Your suggestion might be useful to
some people for some projects, but that doesn't make it "modules" in
any real sense.
Item 7 is my biggest stumbling block when building open source C projects.
While the developer (say you) knows the necessary info, and can somehow
import it into the build system, my job is trying to get it out.
I can't use the intended build system because for one reason or another
it doesn't work, or requires complex dependencies (MSYS, CMake, MSTOOLS,
./configure), or I want to run mcc on it.
That info could trivially be added to the C source code. Nobody actually
needs to use my #pragma scheme; it could simply be a block comment on
one of the modules.
With all your complicated tools, they could surely dump something like
   // cipher.c
   // hmac.c
   // sha2.c
and prepend it to one of the files.  Even a README will do.
That wouldn't hurt, would it?
Complain to the people that made that open source software, not me. But
don't be surprised if they tell you "There's a makefile. It works for
everyone else." Or maybe they will say they can't cater for every
little problem with everyone's unusual computer setup. Maybe they will
try to be helpful, maybe they will be rude and arrogant. Maybe they
will point out that their makefile /is/ just a list of the files needed,
along with the compiler options. Usually projects of any size /do/ have
readme's and build instructions - but some won't.

No matter what, it is not the fault of anyone here, it is not the fault
of "make" or Linux or C, and there is nothing that any of us can do to
help you. (And $DEITY knows, we have tried.)
Post by bart
I already have tools for determining dependencies.  What can your
methods do that mine can't?
(And don't bother saying that you can do it without extra tools -
everyone who wants "make" and "gcc" has them on hand.  And those who
want an IDE that figures out dependencies for them have a dozen free
options there too.  These are all standard tools available to everyone.)
So, if C were to acquire modules, so that a C compiler could determine
that all for it itself (maybe even work out for itself which need
recompiling), would you just ignore that feature and use the same
auxiliary methods you have always done?
That's not unlikely. Why would I change? You still haven't given any
reasons why your tools would be /better/. Even if they could do all I
needed to do for a particular project, "just as good" is not "better",
and does not encourage change.

I would still need "make" for everything else. I would, however, be
quite happy if there were some standard way to get the list of include
files needed by a C file, rather than using gcc-specific flags.
Post by bart
You don't see that the language taking over task (1) of the things that
makefiles do, and possibly (2) (of the list I posted; repeated below),
can streamline makefiles to make them shorter, simpler, easier to write
and to read, and with fewer opportunities to get stuff wrong?
That was a rhetorical question. Obviously not.
I've nothing against shorter or simpler makefiles. But as far as I can
see, you are just moving the same information from a makefile into the C
files.

Indeed, you are duplicating things - now your C files have to have
"#pragma module this, #pragma module that" in addition to having
"#include this.h, #include that.h". With my makefiles, all the "this"
and "that" is found automatically - writing the includes in the C code
is sufficient.
Post by bart
Perhaps I would find your tools worked for a "Hello, world" project.
Maybe they were still okay as it got slightly bigger.  Then I'd have
something that they could not handle, and I'd reach for make.  What
would be the point of using "make" to automate - for example -
post-processing of a binary to add a CRC check, but using your tools
to handle the build?  It's much easier just to use "make" for the
whole thing.
Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.
No, it is the job of the linker. Compiling is the job of the compiler.
Controlling the build is the job of the build system. I don't see
monolithic applications as an advantage.
Post by bart
Maybe you think makefiles should individually list all the 1000s of
functions of a project too?
You are offering me a fish.  I am offering to teach you to fish,
including where to go to catch different kinds of fish.  This is
really a no-brainer choice.
That analogy makes no sense.
Let me try and explain what I do: I write whole-program compilers. That
means that, each time you do a new build, it will reprocess each file
from source. They use the language's module scheme to know which files
to process.
Surely most sensibly organised projects could then be built with :

bcc *.c -o prog.exe

I mean, that's what I can do with gcc if I had something that doesn't
need other flags (which is utterly impractical for my work).

Or if I had lots of programs, each with their own .c file :

for f in *.c; do gcc $f -o ${f%.c}; done
Post by bart
It works for me, and I'm sure could work for others if they didn't have
makefiles forced down their throats and hardwired into their brains.
/Nobody/ has makefiles forced on them. People use "make" because it is
convenient, and it works. If something better comes along, and it is
better enough to overcome the familiarity momentum, people will use that.

I do a round of checking the state of the art of build tools on a
regular basis - perhaps every year or so. I look at what's popular and
what's new, to see if there's anything that would work for me and be a
step up from what I have. So far, I've not found anything that comes
very close to "make" for my needs. There's some tools that are pretty
good in many ways, but none that I can see as being a better choice for
me than "make". I am, however, considering CMake (which works at a
higher level, and outputs makefiles, ninja files or other project
files). It appears to have some disadvantages compared to my makefiles,
such as needing to be run as an extra step when files are added to or
removed from a project or dependencies are changed, but that doesn't
happen too often, and its integration with other tools and projects
might make it an overall win. I'll need some time to investigate and
study it.

So I will happily move from "make" when I find something better - enough
better to make it worth the effort. I'll happily move from gcc, or
Linux, if I find something enough better to make it worth changing. I
regularly look at alternatives and consider them - clang is the key
challenger to gcc for my purposes.

But I have no interest in changing to something vastly more limited and
which adds nothing at all.
bart
2024-02-01 22:29:13 UTC
Reply
Permalink
Post by bart
You don't see that the language taking over task (1) of the things
that makefiles do, and possibly (2) (of the list I posted; repeated
below), can streamline makefiles to make them shorter, simpler, easier
to write and to read, and with fewer opportunities to get stuff wrong?
That was a rhetorical question. Obviously not.
I've nothing against shorter or simpler makefiles.  But as far as I can
see, you are just moving the same information from a makefile into the C
files.
Indeed, you are duplicating things - now your C files have to have
"#pragma module this, #pragma module that" in addition to having
"#include this.h, #include that.h".  With my makefiles, all the "this"
and "that" is found automatically - writing the includes in the C code
is sufficient.
I don't think so. Seeing:

#include "file.h"

doesn't necessarily mean there is a matching "file.c". It might not
exist, or the header might be for some external library, or maybe it
does exist but in a different location.

Or maybe some code may use a file "fred.c", which needs to be submitted
to the compiler, but for which there is either no header used, or uses a
header file with a different name.

As I said, C's uses of .h and .c files are chaotic.

Did you have in mind using gcc's -MM option? For my 'cipher.c' demo,
that only gives a set of header names. Missing are hmac.c and sha2.c.

If I try it on lua.c, it gives me only 5 header files; the project
comprises 33 .c files and 27 .h files.
Post by bart
Post by David Brown
Perhaps I would find your tools worked for a "Hello, world" project.
Maybe they were still okay as it got slightly bigger.  Then I'd have
something that they could not handle, and I'd reach for make.  What
would be the point of using "make" to automate - for example -
post-processing of a binary to add a CRC check, but using your tools
to handle the build?  It's much easier just to use "make" for the
whole thing.
Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.
No, it is the job of the linker.
This is where you're still stuck in the past.

I first got rid of a formal 'linker' about 40 years ago. I got rid of
the notion of combining independently compiled modules into an
executable a decade ago.

Linking would only come up for me if I wanted to statically combine the
outputs of several languages. Since I can't process object files, I'd
need to generate an object file (in my case, one representing ALL my
modules) and hand it to a traditional linker. That would be someone else's job.
  Compiling is the job of the compiler.
Controlling the build is the job of the build system.  I don't see
monolithic applications as an advantage.
I do. You type:

cc prog

without knowing or caring whether the program contains that one module, or there
are 99 more.

In any case, your linker will generate a monolithic binary whether you
like it or not.

But I suspect you don't understand what a 'whole-program compiler' does:

* It means that for each binary, all sources are recompiled at the same
time to create it

* It doesn't mean that an application can only comprise one binary

* It moves the compilation unit granularity from a module to a single
EXE or DLL file

* Interfaces (in the case of a lower-level language) move from
inter-module to inter-program. The boundaries are between one program or
library and another, not between modules.

A language which claims to have a module system, but still compiles a
module at a time, will probably still have discrete inter-module
interfaces, although they may be handled automatically.
Post by bart
Maybe you think makefiles should individually list all the 1000s of
functions of a project too?
Post by David Brown
You are offering me a fish.  I am offering to teach you to fish,
including where to go to catch different kinds of fish.  This is
really a no-brainer choice.
That analogy makes no sense.
Let me try and explain what I do: I write whole-program compilers.
That means that, each time you do a new build, it will reprocess each
file from source. They use the language's module scheme to know which
files to process.
    bcc *.c -o prog.exe
I mean, that's what I can do with gcc if I had something that doesn't
need other flags (which is utterly impractical for my work).
Yes, that's one technique that can be used. But few projects are like
that one. For one or two, you can try *.c and it will work.

Malcolm's resource compiler is like that, but it still benefits from a
file like this:

#pragma module "*.c"
#pragma module "freetype/*.c"
#pragma module "samplerate/*.c"

here called bbx.c. I can build it like this:

c:\bbx\src>mcc bbx
Compiling bbx.c to bbx.exe
/Nobody/ has makefiles forced on them.  People use "make" because it is
convenient, and it works.
BUT IT DOESN'T. It fails a lot of the time on Windows, but they are too
complicated to figure out why. From a recent thread I made about trying
to build piet.c, it failed on extra programs that weren't needed (that
was on Linux; it didn't work at all on Windows).

This is a program which actually only needed:

cc piet.c

(Here cc *.c wouldn't work.) This mirrors pretty much what I see in most
C projects; needless complexity that muddies the waters and creates
failures.

ALL I WANT IS A LIST OF FILES. Why doesn't anybody get that? And why is
it so hard?

Apparently makefiles are superior because you don't even need to know
the name of the program (and will have to hunt for where it put the
executable because it won't tell you!).
But I have no interest in changing to something vastly more limited and
which adds nothing at all.
That's right; it adds nothing, but it takes a lot away! Like a lot of
failure points.

(Look at the Monty Hall problem, but instead of 3 doors, try it with
100, of which 98 will be opened. Then it will be easy to make the right
decision because nearly all the wrong ones have been eliminated.)
Keith Thompson
2024-02-01 23:28:03 UTC
Reply
Permalink
bart <***@freeuk.com> writes:
[...]
Post by bart
As I said, C's uses of .h and .c files are chaotic.
C doesn't use .h and .c files. The C standard doesn't specify file
extensions, either for source files or for files included with #include.

It's fairly straightforward to implement something similar to "modules"
in C, using matching *.h and *.c files, include guards, and so forth,
but it requires a bit of discipline. It's a mechanism built on top of
the language, not a feature of the language itself (though of course the
language definition intentionally supports that usage).

Some projects might use .h and .c files in a chaotic manner. Most, in
my experience, do not.
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+***@gmail.com
Working, but not speaking, for Medtronic
void Void(void) { Void(); } /* The recursive call of the void */
Lawrence D'Oliveiro
2024-02-02 01:03:09 UTC
Reply
Permalink
Post by Keith Thompson
The C standard doesn't specify file
extensions, either for source files or for files included with #include.
It does for the standard library includes, though.
Keith Thompson
2024-02-02 01:42:32 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Keith Thompson
The C standard doesn't specify file
extensions, either for source files or for files included with #include.
It does for the standard library includes, though.
Strictly speaking, it doesn't specify that the standard library headers
are files. But yes, their names end in ".h", and that's certainly
because of the common convention to use ".h" as the extension for C
header files.
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+***@gmail.com
Working, but not speaking, for Medtronic
void Void(void) { Void(); } /* The recursive call of the void */
Lawrence D'Oliveiro
2024-02-02 02:43:51 UTC
Reply
Permalink
Post by Keith Thompson
Post by Lawrence D'Oliveiro
The C standard doesn't specify file extensions, either for source
files or for files included with #include.
It does for the standard library includes, though.
Strictly speaking, it doesn't specify that the standard library headers
are files.
From the C99 spec, page 149:

6.10.2 Source file inclusion
Constraints
A #include directive shall identify a header or source file that
can be processed by the implementation.

...

3 A preprocessing directive of the form
# include "q-char-sequence" new-line
causes the replacement of that directive by the entire contents of
the source file identified by the specified sequence between the "
delimiters. The named source file is searched for in an
implementation-defined manner.

So you see, the spec very explicitly uses the term “file”.

<https://www.open-std.org/JTC1/SC22/WG14/www/docs/n869/>
Keith Thompson
2024-02-02 03:03:38 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Keith Thompson
Post by Lawrence D'Oliveiro
The C standard doesn't specify file extensions, either for source
files or for files included with #include.
It does for the standard library includes, though.
Strictly speaking, it doesn't specify that the standard library headers
are files.
6.10.2 Source file inclusion
Constraints
A #include directive shall identify a header or source file that
can be processed by the implementation.
...
3 A preprocessing directive of the form
# include "q-char-sequence" new-line
causes the replacement of that directive by the entire contents of
the source file identified by the specified sequence between the "
delimiters. The named source file is searched for in an
implementation-defined manner.
So you see, the spec very explicitly uses the term “file”.
<https://www.open-std.org/JTC1/SC22/WG14/www/docs/n869/>
Yes, but not in reference to the standard headers.

A #include directive with <> searches for a "header", which is not
stated to be a file. A #include directive with "" searches for a file
in an implementation-defined manner; if that search fails, it tries
again as if <> had been used.

References to standard headers (stdio.h et al) always use the <> syntax.
You can write `#include "stdio.h"` if you like, but it risks picking up
a file with the same name instead of the standard header (which *might*
be what you want).

BTW, the n1256.pdf draft is a close approximation to the C99 standard;
it consists of the published standard with the three Technical
Corrigenda merged into it. The n1570.pdf draft is the last publicly
release draft before C11 was published, and is close enough to C11 for
most purposes.
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+***@gmail.com
Working, but not speaking, for Medtronic
void Void(void) { Void(); } /* The recursive call of the void */
David Brown
2024-02-02 09:54:21 UTC
Reply
Permalink
Post by Keith Thompson
Post by Lawrence D'Oliveiro
Post by Keith Thompson
Post by Lawrence D'Oliveiro
The C standard doesn't specify file extensions, either for source
files or for files included with #include.
It does for the standard library includes, though.
Strictly speaking, it doesn't specify that the standard library headers
are files.
6.10.2 Source file inclusion
Constraints
A #include directive shall identify a header or source file that
can be processed by the implementation.
...
3 A preprocessing directive of the form
# include "q-char-sequence" new-line
causes the replacement of that directive by the entire contents of
the source file identified by the specified sequence between the "
delimiters. The named source file is searched for in an
implementation-defined manner.
So you see, the spec very explicitly uses the term “file”.
<https://www.open-std.org/JTC1/SC22/WG14/www/docs/n869/>
Yes, but not in reference to the standard headers.
A #include directive with <> searches for a "header", which is not
stated to be a file. A #include directive with "" searches for a file
in an implementation-defined manner; if that search fails, it tries
again as if <> had been used.
References to standard headers (stdio.h et al) always use the <> syntax.
You can write `#include "stdio.h"` if you like, but it risks picking up
a file with the same name instead of the standard header (which *might*
be what you want).
BTW, the n1256.pdf draft is a close approximation to the C99 standard;
it consists of the published standard with the three Technical
Corrigenda merged into it. The n1570.pdf draft is the last publicly
release draft before C11 was published, and is close enough to C11 for
most purposes.
In 7.1.2 "Standard headers", it says:

"""
Each library function is declared, with a type that includes a
prototype, in a header, 188) whose contents are made available by the
#include preprocessing directive.
"""

"Header" here is in italics, meaning it is a definition of the term.
And footnote 188 has :

"""
header is not necessarily a source file, nor are the < and > delimited
sequences in header names necessarily valid source file names.
"""

(I am quoting from n2346, the final C18 draft. The section numbering is
generally consistent between standard versions, but footnote numbers
change, in case anyone is looking this up.)


I have personally used a toolchain where the standard library headers
did not exist as files, but were internal to the compiler (and the
implementations were internal to the linker). I think the toolchain
company was a bit paranoid that others would copy their proprietary library.
tTh
2024-02-02 02:22:46 UTC
Reply
Permalink
   cc prog
without knowing or caring whether the program contains that one module, or there
are 99 more.
I also do. You type:

make prog

without knowing or caring whether the program contains that one module, or
there are 51 more.
--
+---------------------------------------------------------------------+
| https://tube.interhacker.space/a/tth/video-channels |
+---------------------------------------------------------------------+
bart
2024-02-02 11:13:13 UTC
Reply
Permalink
    cc prog
without knowing or caring whether the program contains that one module, or
there are 99 more.
   make prog
without knowing or caring whether the program contains that one module, or
there are 51 more.
Really? OK, let's try it:

c:\c>make cipher
cc cipher.c -o cipher
C:\tdm\bin\ld.exe:
C:\Users\44775\AppData\Local\Temp\ccRvFIdY.o:cipher.c:(.text+0x55a):
undefined reference to `hmac_sha256_final'

It seems I do need to care after all!

Oh, you mean I don't need to care AFTER I've created a complicated
makefile containing all those details that you claim I don't need to
bother with?

Let's try with a real solution:

c:\c>mcc cipher
Compiling cipher.c to cipher.exe


Or here's one where I don't need to add anything to the C code:

c:\c>bcc -auto cipher
1 Compiling cipher.c to cipher.asm (Pass 1)
* 2 Compiling hmac.c to hmac.asm (Pass 2)
* 3 Compiling sha2.c to sha2.asm (Pass 2)
Assembling to cipher.exe

I'm the one who's trying innovative approaches to minimise the extra
gumph you need to provide to build programs.

You're the one who needs to first write a pile of garbage within a
makefile in order for you to do:

make prog

Below is the makefile needed to build lua 5.4, which is a project of
only 35 C modules. Simple, isn't it?

---------------------------------
# Makefile for building Lua
# See ../doc/readme.html for installation and customization instructions.

# == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT =======================

# Your platform. See PLATS for possible values.
PLAT= guess

CC= gcc -std=gnu99
CFLAGS= -O2 -Wall -Wextra -DLUA_COMPAT_5_3 $(SYSCFLAGS) $(MYCFLAGS)
LDFLAGS= $(SYSLDFLAGS) $(MYLDFLAGS)
LIBS= -lm $(SYSLIBS) $(MYLIBS)

AR= ar rcu
RANLIB= ranlib
RM= rm -f
UNAME= uname

SYSCFLAGS=
SYSLDFLAGS=
SYSLIBS=

MYCFLAGS=
MYLDFLAGS=
MYLIBS=
MYOBJS=

# Special flags for compiler modules; -Os reduces code size.
CMCFLAGS=

# == END OF USER SETTINGS -- NO NEED TO CHANGE ANYTHING BELOW THIS LINE =======

PLATS= guess aix bsd c89 freebsd generic ios linux linux-readline macosx mingw posix solaris

LUA_A= liblua.a
CORE_O= lapi.o lcode.o lctype.o ldebug.o ldo.o ldump.o lfunc.o lgc.o llex.o lmem.o lobject.o lopcodes.o lparser.o lstate.o lstring.o ltable.o ltm.o lundump.o lvm.o lzio.o
LIB_O= lauxlib.o lbaselib.o lcorolib.o ldblib.o liolib.o lmathlib.o loadlib.o loslib.o lstrlib.o ltablib.o lutf8lib.o linit.o
BASE_O= $(CORE_O) $(LIB_O) $(MYOBJS)

LUA_T= lua
LUA_O= lua.o

LUAC_T= luac
LUAC_O= luac.o

ALL_O= $(BASE_O) $(LUA_O) $(LUAC_O)
ALL_T= $(LUA_A) $(LUA_T) $(LUAC_T)
ALL_A= $(LUA_A)

# Targets start here.
default: $(PLAT)

all: $(ALL_T)

o: $(ALL_O)

a: $(ALL_A)

$(LUA_A): $(BASE_O)
$(AR) $@ $(BASE_O)
$(RANLIB) $@

$(LUA_T): $(LUA_O) $(LUA_A)
$(CC) -o $@ $(LDFLAGS) $(LUA_O) $(LUA_A) $(LIBS)

$(LUAC_T): $(LUAC_O) $(LUA_A)
$(CC) -o $@ $(LDFLAGS) $(LUAC_O) $(LUA_A) $(LIBS)

test:
./$(LUA_T) -v

clean:
$(RM) $(ALL_T) $(ALL_O)

depend:
@$(CC) $(CFLAGS) -MM l*.c

echo:
@echo "PLAT= $(PLAT)"
@echo "CC= $(CC)"
@echo "CFLAGS= $(CFLAGS)"
@echo "LDFLAGS= $(LDFLAGS)"
@echo "LIBS= $(LIBS)"
@echo "AR= $(AR)"
@echo "RANLIB= $(RANLIB)"
@echo "RM= $(RM)"
@echo "UNAME= $(UNAME)"

# Convenience targets for popular platforms.
ALL= all

help:
@echo "Do 'make PLATFORM' where PLATFORM is one of these:"
@echo " $(PLATS)"
@echo "See doc/readme.html for complete instructions."

guess:
@echo Guessing `$(UNAME)`
@$(MAKE) `$(UNAME)`

AIX aix:
$(MAKE) $(ALL) CC="xlc" CFLAGS="-O2 -DLUA_USE_POSIX -DLUA_USE_DLOPEN" SYSLIBS="-ldl" SYSLDFLAGS="-brtl -bexpall"

bsd:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_POSIX -DLUA_USE_DLOPEN" SYSLIBS="-Wl,-E"

c89:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_C89" CC="gcc -std=c89"
@echo ''
@echo '*** C89 does not guarantee 64-bit integers for Lua.'
@echo '*** Make sure to compile all external Lua libraries'
@echo '*** with LUA_USE_C89 to ensure consistency'
@echo ''

FreeBSD NetBSD OpenBSD freebsd:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_LINUX -DLUA_USE_READLINE -I/usr/include/edit" SYSLIBS="-Wl,-E -ledit" CC="cc"

generic: $(ALL)

ios:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_IOS"

Linux linux: linux-noreadline

linux-noreadline:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_LINUX" SYSLIBS="-Wl,-E -ldl"

linux-readline:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_LINUX -DLUA_USE_READLINE" SYSLIBS="-Wl,-E -ldl -lreadline"

Darwin macos macosx:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_MACOSX -DLUA_USE_READLINE" SYSLIBS="-lreadline"

mingw:
$(MAKE) "LUA_A=lua54.dll" "LUA_T=lua.exe" \
"AR=$(CC) -shared -o" "RANLIB=strip --strip-unneeded" \
"SYSCFLAGS=-DLUA_BUILD_AS_DLL" "SYSLIBS=" "SYSLDFLAGS=-s" lua.exe
$(MAKE) "LUAC_T=luac.exe" luac.exe

posix:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_POSIX"

SunOS solaris:
$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_POSIX -DLUA_USE_DLOPEN -D_REENTRANT" SYSLIBS="-ldl"

# Targets that do not create files (not all makes understand .PHONY).
.PHONY: all $(PLATS) help test clean default o a depend echo

# Compiler modules may use special flags.
llex.o:
$(CC) $(CFLAGS) $(CMCFLAGS) -c llex.c

lparser.o:
$(CC) $(CFLAGS) $(CMCFLAGS) -c lparser.c

lcode.o:
$(CC) $(CFLAGS) $(CMCFLAGS) -c lcode.c

# DO NOT DELETE

lapi.o: lapi.c lprefix.h lua.h luaconf.h lapi.h llimits.h lstate.h \
lobject.h ltm.h lzio.h lmem.h ldebug.h ldo.h lfunc.h lgc.h lstring.h \
ltable.h lundump.h lvm.h
lauxlib.o: lauxlib.c lprefix.h lua.h luaconf.h lauxlib.h
lbaselib.o: lbaselib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
lcode.o: lcode.c lprefix.h lua.h luaconf.h lcode.h llex.h lobject.h \
llimits.h lzio.h lmem.h lopcodes.h lparser.h ldebug.h lstate.h ltm.h \
ldo.h lgc.h lstring.h ltable.h lvm.h
lcorolib.o: lcorolib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
lctype.o: lctype.c lprefix.h lctype.h lua.h luaconf.h llimits.h
ldblib.o: ldblib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
ldebug.o: ldebug.c lprefix.h lua.h luaconf.h lapi.h llimits.h lstate.h \
lobject.h ltm.h lzio.h lmem.h lcode.h llex.h lopcodes.h lparser.h \
ldebug.h ldo.h lfunc.h lstring.h lgc.h ltable.h lvm.h
ldo.o: ldo.c lprefix.h lua.h luaconf.h lapi.h llimits.h lstate.h \
lobject.h ltm.h lzio.h lmem.h ldebug.h ldo.h lfunc.h lgc.h lopcodes.h \
lparser.h lstring.h ltable.h lundump.h lvm.h
ldump.o: ldump.c lprefix.h lua.h luaconf.h lobject.h llimits.h lstate.h \
ltm.h lzio.h lmem.h lundump.h
lfunc.o: lfunc.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lgc.h
lgc.o: lgc.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lgc.h lstring.h ltable.h
linit.o: linit.c lprefix.h lua.h luaconf.h lualib.h lauxlib.h
liolib.o: liolib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
llex.o: llex.c lprefix.h lua.h luaconf.h lctype.h llimits.h ldebug.h \
lstate.h lobject.h ltm.h lzio.h lmem.h ldo.h lgc.h llex.h lparser.h \
lstring.h ltable.h
lmathlib.o: lmathlib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
lmem.o: lmem.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
llimits.h ltm.h lzio.h lmem.h ldo.h lgc.h
loadlib.o: loadlib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
lobject.o: lobject.c lprefix.h lua.h luaconf.h lctype.h llimits.h \
ldebug.h lstate.h lobject.h ltm.h lzio.h lmem.h ldo.h lstring.h lgc.h \
lvm.h
lopcodes.o: lopcodes.c lprefix.h lopcodes.h llimits.h lua.h luaconf.h
loslib.o: loslib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
lparser.o: lparser.c lprefix.h lua.h luaconf.h lcode.h llex.h lobject.h \
llimits.h lzio.h lmem.h lopcodes.h lparser.h ldebug.h lstate.h ltm.h \
ldo.h lfunc.h lstring.h lgc.h ltable.h
lstate.o: lstate.c lprefix.h lua.h luaconf.h lapi.h llimits.h lstate.h \
lobject.h ltm.h lzio.h lmem.h ldebug.h ldo.h lfunc.h lgc.h llex.h \
lstring.h ltable.h
lstring.o: lstring.c lprefix.h lua.h luaconf.h ldebug.h lstate.h \
lobject.h llimits.h ltm.h lzio.h lmem.h ldo.h lstring.h lgc.h
lstrlib.o: lstrlib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
ltable.o: ltable.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
llimits.h ltm.h lzio.h lmem.h ldo.h lgc.h lstring.h ltable.h lvm.h
ltablib.o: ltablib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
ltm.o: ltm.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
llimits.h ltm.h lzio.h lmem.h ldo.h lgc.h lstring.h ltable.h lvm.h
lua.o: lua.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
luac.o: luac.c lprefix.h lua.h luaconf.h lauxlib.h ldebug.h lstate.h \
lobject.h llimits.h ltm.h lzio.h lmem.h lopcodes.h lopnames.h lundump.h
lundump.o: lundump.c lprefix.h lua.h luaconf.h ldebug.h lstate.h \
lobject.h llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lstring.h lgc.h \
lundump.h
lutf8lib.o: lutf8lib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
lvm.o: lvm.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lgc.h lopcodes.h lstring.h \
ltable.h lvm.h ljumptab.h
lzio.o: lzio.c lprefix.h lua.h luaconf.h llimits.h lmem.h lstate.h \
lobject.h ltm.h lzio.h

# (end of Makefile)
Gary R. Schmidt
2024-02-02 13:25:23 UTC
Reply
Permalink
On 02/02/2024 22:13, bart wrote:
[Bitching about "make" snipped]

Try "cake", Zoltan wrote it many decades ago, when we were at
$GOSHWHATAUNIVERSITY, because he thought "make" was too prolix.

Cheers,
Gary B-)
bart
2024-02-02 13:29:53 UTC
Reply
Permalink
Post by bart
You're the one who needs to first write a pile of garbage within a
make prog
Below is the makefile needed to build lua 5.4, which is a project of
only 35 C modules. Simple, isn't it?
Post by bart
---------------------------------
# Makefile for building Lua
# See ../doc/readme.html for installation and customization instructions.
# == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT
Now this is an interesting comment. The makefile is set up for gcc. For
another compiler it won't work.

If I try to switch to 'tcc', there are a number of problems. First,
unless you do 'make clean', the .o files lying about (I guess a
consequence of being able to do incremental builds) are incompatible.

At this point I discovered a bug in the makefile for Lua (you might say
it's not a bug, it's one of the settings that need changing, but I've no
idea how or where):

Although this makefile works with gcc on Windows, it thinks the
executable is called 'lua', not 'lua.exe'. It will produce 'lua.exe'
with gcc, but it checks for the existence of 'lua'.

That is never present, so it always links; it never says 'is up-to-date'.

With tcc however, there's another issue: tcc requires the .exe extension
in the -o option, otherwise it writes the executable as 'lua'. Now, at
last, make sees 'lua' and deems it up-to-date. Unfortunately that won't
run under Windows.

Either not at all, or it will use the lua.exe left over from gcc. I can
bodge this by using '-o $@.exe', producing lua.exe from tcc, but make is
still checking 'lua'.
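A common way around this (a sketch only, not something the supplied
makefile does; EXE is an invented variable) is to put the platform suffix
in a make variable, so the target that make checks is the same file the
compiler writes:

```make
# Sketch: parameterise the executable suffix so make's target name
# matches what the compiler actually produces. EXE is assumed, not
# part of the Lua makefile; pass EXE=.exe on Windows.
EXE ?=
LUA_T= lua$(EXE)

$(LUA_T): $(LUA_O) $(LUA_A)
	$(CC) -o $@ $(LDFLAGS) $(LUA_O) $(LUA_A) $(LIBS)
```

Invoked as 'make EXE=.exe', both the -o option and the up-to-date check
then name lua.exe, for gcc and tcc alike.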

There are some minor things: tcc doesn't like the -lm option for example.

But what it comes down to is that it seems I need a separate makefile
for each compiler. As supplied, it didn't even work 100% for gcc on Windows.

That means duplicating all that file info.

This is a solution I used before, using this @ file:

------------------------------
-O2 -s -o lua.exe
lua.c lapi.c lcode.c lctype.c ldebug.c ldo.c ldump.c lfunc.c lgc.c
llex.c lmem.c lobject.c lopcodes.c lparser.c lstate.c lstring.c
ltable.c ltm.c lundump.c lvm.c lzio.c lauxlib.c lbaselib.c lcorolib.c
ldblib.c liolib.c lmathlib.c loadlib.c loslib.c lstrlib.c ltablib.c
lutf8lib.c linit.c
------------------------------


If I run it like this:

gcc @luafiles

it produces a 260KB executable. Which is another interesting thing:
using 'make lua' set up for gcc produces a 360KB executable.

But I can also run it like this:

tcc @luafiles

The same file works for both gcc and tcc.
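The @ file itself can also be generated rather than maintained by hand.
A sketch, assuming the current directory holds exactly the wanted
sources (file names as in the list above):

```shell
# Sketch: build the @ file from whatever .c files are present, instead
# of listing them manually. The flags line goes first, then one source
# file per line.
{ printf -- '-O2 -s -o lua.exe\n'; ls *.c; } > luafiles
# Then either compiler reads the same list:
#   gcc @luafiles
#   tcc @luafiles
```

The list then tracks the directory contents automatically, at the cost
of the caveat below about unwanted extra files.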

It won't work for mcc unless I split it into two, as that first line of
options doesn't work there. However with mcc I can now just do this:

mcc lua

So two solutions for this project that (1) don't involve a makefile; (2)
work better than the makefile.

It's true that it involved recompiling every module. But tcc still
builds this project in 0.3 seconds.

This project contains 34 C files, of which 33 are needed (not 35 as I
said). That means that using *.c is not possible, unless that extra file
(I believe used when building a shared library) is renamed.

If that is done, then all compilers just need "*.c" plus whatever other
options are needed.
David Brown
2024-02-02 09:47:12 UTC
Reply
Permalink
Post by bart
You don't see that the language taking over task (1) of the things
that makefiles do, and possibly (2) (of the list I posted; repeated
below), can streamline makefiles to make them shorter, simpler,
easier to write and to read, and with fewer opportunities to get
stuff wrong?
That was a rhetorical question. Obviously not.
I've nothing against shorter or simpler makefiles.  But as far as I
can see, you are just moving the same information from a makefile into
the C files.
Indeed, you are duplicating things - now your C files have to have
"#pragma module this, #pragma module that" in addition to having
"#include this.h, #include that.h".  With my makefiles, all the "this"
and "that" is found automatically - writing the includes in the C code
is sufficient.
    #include "file.h"
doesn't necessarily mean there is a matching "file.c". It might not
exist, or the header might be for some external library, or maybe it
does exist but in a different location.
As I said, you are duplicating things.

For my builds, I do not have anywhere that I need to specify "file.c".
Or maybe some code may use a file "fred.c", which needs to be submitted
to the compiler, but for which there is either no header used, or uses a
header file with a different name.
As I said, C's uses of .h and .c files are chaotic.
My uses of .h and .c files are not chaotic.

Maybe you can't write well-structured C programs. Certainly not
everyone can. (And /please/ do not give another list of open source
programs that you don't like. I didn't write them. I can tell you how
and why /I/ organise my projects and makefiles - I don't speak for others.)
Did you have in mind using gcc's -MM option? For my 'cipher.c' demo,
that only gives a set of header names.  Missing are hmac.c and sha2.c.
I use makefiles where gcc's "-M" options are part of the solution - not
the whole solution.
If I try it on lua.c, it gives me only 5 header files; the project
comprises 33 .c files and 27 .h files.
I don't care. I did not write lua.

But I /have/ integrated lua with one of my projects, long ago. It fit
into my makefile format without trouble - I added the lua directory as a
subdirectory of my source directory, and that was all that was needed.
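For reference, the usual shape of such a makefile (a generic sketch of
the approach being described, not David Brown's actual file; 'prog' and
the variables are placeholders): the source list comes from a wildcard,
and the compiler's -MMD -MP options write a .d dependency fragment per
object, which make re-reads on the next run:

```make
# Generic sketch: sources discovered automatically, header dependencies
# generated by the compiler, no hand-maintained file lists.
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

prog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS) $(LIBS)

%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

-include $(DEPS)
```

This is why writing the #include lines in the C code is sufficient: the
compiler, not the programmer, reports which headers each object depends on.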
Post by bart
Post by David Brown
Perhaps I would find your tools worked for a "Hello, world" project.
Maybe they were still okay as it got slightly bigger.  Then I'd have
something that they could not handle, and I'd reach for make.  What
would be the point of using "make" to automate - for example -
post-processing of a binary to add a CRC check, but using your tools
to handle the build?  It's much easier just to use "make" for the
whole thing.
Because building one binary is a process that should be the job of a
compiler, not some random external tool that knows nothing of the
language or compiler.
No, it is the job of the linker.
That is where you're still stuck in the past.
I first got rid of a formal 'linker' about 40 years ago. I got rid of
the notion of combining independently compiled modules into an
executable a decade ago.
No, you built a monolithic tool that /included/ the linker. That's fine
for niche tools that are not intended to work with anything else. Most
people work with many tools - that's why we have standards, defined file
formats, and flexible tools with wide support.

Other people got rid of monolithic tools forty years ago when they
realised it was a terrible way to organise things.
I know exactly what it does. I am entirely without doubt that I know
the point and advantages of them better than you do - the /real/ points
and advantages, not some pathetic "it means I don't have to use that
horrible nasty make program" reason.
* It means that for each binary, all sources are recompiled at the same
  time to create it
No, it does not.
* It doesn't mean that an application can only comprise one binary
Correct.
* It moves the compilation unit granularity from a module to a single
  EXE or DLL file
No, it does not.
* Interfaces (in the case of a lower level language), are moved inter-
  module to inter-program. The boundaries are between one program or
  library and another, not between modules.
Correct.
A language which claims to have a module system, but still compiles a
module at a time, will probably still have discrete inter-module
interfaces, although they may be handled automatically.
Correct.


In real-world whole program compilation systems, the focus is on
inter-module optimisations. Total build times are expected to go /up/.
Build complexity can be much higher, especially for large programs. It
is more often used for C++ than C.

The main point of a lot of whole-program compilation is to allow
cross-module optimisation. It means you can have "access" functions
hidden away in implementation files so that you avoid global variables
or inter-dependencies between modules, but now they can be inline across
modules so that you have no overhead or costs for this. It means you
can write code that is more structured and modular, with different teams
handling different parts, and with layers of abstractions, but when you
pull it all together into one whole-program build, the run-time costs
and overhead for this all disappear. And it means lots of checks and
static analysis can be done across the whole program.


For such programs, each translation unit is still compiled separately,
but the "object" files contain internal data structures and analysis
information, rather than generated code. Lots of the work is done by
this point, with inter-procedural optimisations done within the unit.
These compilations will be done as needed, in parallel, under the
control of a build system. Then they are combined for the linking and
link-time optimisation which fits the parts together. Doing this in a
scalable way is hard, and the subject of a lot of research, as you need
to partition it into chunks that can be handled in parallel on multiple
cpu cores (or even distributed amongst servers). Once you have parts of
code that are ready, they are handed on to backend compilers that do
more optimisation and generate the object code, and this in turn is
linked (sometimes incrementally in parts, again aiming at improving
parallel building and scalability).
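As a much simplified illustration for GCC, this is what -flto gives:
object files carry the compiler's intermediate representation, and code
generation and cross-module inlining happen at link time (a sketch; flag
placement follows GCC's documentation, not any makefile in this thread):

```make
# Sketch: enabling link-time optimisation in a GCC build. Objects then
# hold intermediate representation rather than final code; the real
# code generation moves to the link step.
CFLAGS  += -O2 -flto
LDFLAGS += -O2 -flto=auto    # -flto=auto parallelises the LTO link stage
```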


You go to all this effort because you are building software that is used
by millions of people, and your build effort is minor compared to the
total improvements for all users combined. Or you do it because you are
building speed-critical software. Or you want the best static analysis
you can get, and want that done across modules. Or you are building
embedded systems that need to be as efficient as possible.

You don't do it because you find "make" ugly.


It is also very useful on old-fashioned microcontrollers with multiple
banks for data ram and code memory, and no good data stack access - the
compiler can do large-scale lifetime analysis and optimise placement and
the re-use of the very limited ram.
/Nobody/ has makefiles forced on them.  People use "make" because it
is convenient, and it works.
BUT IT DOESN'T.
IT DOES WORK.

People use it all the time.
It fails a lot of the time on Windows, but they are too
complicated to figure out why.
People use it all the time on Windows.

Even Microsoft ships its own version of make, "nmake.exe", and has done
for decades.

/You/ can't work it, but you excel at failing to get things working.
You have a special gift - you just have to look at a computer with tools
that you didn't write yourself, and it collapses.
But I have no interest in changing to something vastly more limited
and which adds nothing at all.
That's right; it adds nothing, but it takes a lot away! Like a lot of
failure points.
Like pretty much everything I need.
Michael S
2024-02-02 13:45:31 UTC
Reply
Permalink
On Fri, 2 Feb 2024 10:47:12 +0100
Post by David Brown
Post by bart
You don't see that the language taking over task (1) of the
things that makefiles do, and possibly (2) (of the list I posted;
repeated below), can streamline makefiles to make them shorter,
simpler, easier to write and to read, and with fewer
opportunities to get stuff wrong?
That was a rhetorical question. Obviously not.
I've nothing against shorter or simpler makefiles.  But as far as
I can see, you are just moving the same information from a
makefile into the C files.
Indeed, you are duplicating things - now your C files have to have
"#pragma module this, #pragma module that" in addition to having
"#include this.h, #include that.h".  With my makefiles, all the
"this" and "that" is found automatically - writing the includes in
the C code is sufficient.
    #include "file.h"
doesn't necessarily mean there is a matching "file.c". It might not
exist, or the header might be for some external library, or maybe
it does exist but in a different location.
As I said, you are duplicating things.
For my builds, I do not have anywhere that I need to specify "file.c".
Or maybe some code may use a file "fred.c", which needs to be
submitted to the compiler, but for which there is either no header
used, or uses a header file with a different name.
As I said, C's uses of .h and .c files are chaotic.
My uses of .h and .c files are not chaotic.
Maybe you can't write well-structured C programs. Certainly not
everyone can. (And /please/ do not give another list of open source
programs that you don't like. I didn't write them. I can tell you
how and why /I/ organise my projects and makefiles - I don't speak
for others.)
Did you have in mind using gcc's -MM option? For my 'cipher.c'
demo, that only gives a set of header names.  Missing are hmac.c
and sha2.c.
I use makefiles where gcc's "-M" options are part of the solution -
not the whole solution.
If I try it on lua.c, it gives me only 5 header files; the project
comprises 33 .c files and 27 .h files.
I don't care. I did not write lua.
But I /have/ integrated lua with one of my projects, long ago. It
fit into my makefile format without trouble - I added the lua
directory as a subdirectory of my source directory, and that was all
that was needed.
Post by bart
Post by David Brown
Perhaps I would find your tools worked for a "Hello, world"
project. Maybe they were still okay as it got slightly bigger.
Then I'd have something that they could not handle, and I'd
reach for make.  What would be the point of using "make" to
automate - for example - post-processing of a binary to add a
CRC check, but using your tools to handle the build?  It's much
easier just to use "make" for the whole thing.
Because building one binary is a process that should be the job of a
compiler, not some random external tool that knows nothing of the
language or compiler.
No, it is the job of the linker.
That is where you're still stuck in the past.
I first got rid of a formal 'linker' about 40 years ago. I got rid
of the notion of combining independently compiled modules into an
executable a decade ago.
No, you built a monolithic tool that /included/ the linker. That's
fine for niche tools that are not intended to work with anything
else. Most people work with many tools - that's why we have
standards, defined file formats, and flexible tools with wide support.
Other people got rid of monolithic tools forty years ago when they
realised it was a terrible way to organise things.
Actually, nowadays monolithic tools are solid majority in programming.
I mean, programming in general, not C/C++/Fortran programming which by
itself is a [sizable] minority.
Even in C++, a majority uses non-monolithic tools well-hidden behind
front end (IDE) that makes them indistinguishable from monolithic.
Lawrence D'Oliveiro
2024-02-01 23:30:14 UTC
Reply
Permalink
Post by David Brown
I am, however, considering CMake (which works at a
higher level, and outputs makefiles, ninja files or other project
files).
Ninja was created as an alternative to Make. Basically, if your Makefiles
are going to be generated by a meta-build system like CMake or Meson, then
they don’t need to support the kinds of niceties that facilitate writing
them by hand. So you strip it right down to the bare-bones functionality,
which makes your builds fast while consuming minimal resources, and that
is Ninja.
Post by David Brown
It appears to have some disadvantages compared to my makefiles,
such as needing to be run as an extra step when files are added to or
removed from a project or dependencies are changed, but that doesn't
happen too often, and its integration with other tools and projects
might make it an overall win.
Some are proposing Meson as an alternative to CMake. I think they are
saying that the fact that its scripting language is not fully Turing-
equivalent is an advantage.

Me, while I think the CMake language can be a little clunky in places, I
still think having Turing-equivalence is better than not having it. ;)
David Brown
2024-02-02 10:05:22 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by David Brown
I am, however, considering CMake (which works at a
higher level, and outputs makefiles, ninja files or other project
files).
Ninja was created as an alternative to Make.
It is an alternative to some uses of make - but by no means all uses.
Post by Lawrence D'Oliveiro
Basically, if your Makefiles
are going to be generated by a meta-build system like CMake or Meson, then
they don’t need to support the kinds of niceties that facilitate writing
them by hand. So you strip it write down to the bare-bones functionality,
which makes your builds fast while consuming minimal resources, and that
is Ninja.
Yes.

It is not normal to write ninja files by hand - the syntax is relatively
simple, but quite limited. So it covers the lower level bits of "make",
but not the higher level bits.


Perhaps ninja is the tool that Bart is looking for? For the kinds of
things he is doing, I don't think it would be hard to write the ninja
files by hand.
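For what it's worth, a hand-written ninja file for the three-file cipher
demo from earlier in the thread might look like this (only the file
names come from the thread; the rule bodies are assumptions):

```ninja
# Sketch of a hand-written build.ninja for cipher.c/hmac.c/sha2.c.
rule cc
  command = gcc -c $in -o $out
rule link
  command = gcc $in -o $out

build cipher.o: cc cipher.c
build hmac.o: cc hmac.c
build sha2.o: cc sha2.c
build cipher.exe: link cipher.o hmac.o sha2.o

default cipher.exe
```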



So it won't work for my needs, as I want to work at a higher level
(without manually detailing file lists and dependencies).

But if I find that CMake supports all I need at that level, then I
expect I could just as easily generate ninja files as makefiles. The
only issue that I know of is that ninja does not have full jobserver
support, which could be important if the build involves other parallel
tasks (like gcc LTO linking).
Post by Lawrence D'Oliveiro
Post by David Brown
It appears to have some disadvantages compared to my makefiles,
such as needing to be run as an extra step when files are added to or
removed from a project or dependencies are changed, but that doesn't
happen too often, and its integration with other tools and projects
might make it an overall win.
Some are proposing Meson as an alternative to CMake. I think they are
saying that the fact that its scripting language is not fully Turing-
equivalent is an advantage.
Me, while I think the CMake language can be a little clunky in places, I
still think having Turing-equivalence is better than not having it. ;)
For many reasons, CMake is the prime candidate as an alternative to make
for my use.
Malcolm McLean
2024-02-02 00:26:09 UTC
Reply
Permalink
Post by bart
It works for me, and I'm sure could work for others if they didn't
have makefiles forced down their throats and hardwired into their brains.
/Nobody/ has makefiles forced on them.  People use "make" because it is
convenient, and it works.  If something better comes along, and it is
better enough to overcome the familiarity momentum, people will use that.
What?
You have total control of your programming environment and never have to
consider anybody else? For hobby programming you do in a way. Not if you
want other people to use your stuff. But you can always say that the fun
of doing things exactly your way outweighs the fun of getting downloads.

But for professional or academic programming, often you'll find you have
to use make. You don't have a choice. Either someone else took the
decision, or there are so many other people who expect that build shall
be via make that you have no real alternative.

Now in one study, someone had wanted to do a survey of genetic sequence
analysis software. They reported no results for half the programs,
because they had attempted to build them, and failed. They didn't say,
but it's a fair bet that most of those build systems used make. The
software distribution system is a disaster and badly needs fixing.

But there are lots of caveats. Bart's system might be better, but as
you say it needs traction. I'd be reluctant to evangelise for it and get
everyone to use it at work, because it might prove to have major
drawbacks, and then I'd get the blame. Which I wouldn't if I wrote a
makefile which broke. Not in the same way. And of course one person
can't rigorously test and debug, and build an ecosystem of ancillary
tools, documentation, resources, help message boards. However a lot of
things start small, with one lone programmer beavering away in his
bedroom. It's necessary to look at the positives, and not strangle
things at birth.
--
Check out Basic Algorithms and my other books:
https://www.lulu.com/spotlight/bgy1mm
bart
2024-02-02 00:35:23 UTC
Reply
Permalink
Post by Malcolm McLean
Post by bart
It works for me, and I'm sure could work for others if they didn't
have makefiles forced down their throats and hardwired into their brains.
/Nobody/ has makefiles forced on them.  People use "make" because it
is convenient, and it works.  If something better comes along, and it
is better enough to overcome the familiarity momentum, people will use
that.
What?
You have total control of your programming environment and never have to
consider anybody else? For hobby programming you do in a way. Not if you
want other people to use your stuff. But can always say that fun of
doing things exactly your way outweighs the fun of getting downloads.
But for professional or academic programming, often you'll find you have
to use make. You don't have a choice. Either someone else took the
decision, or there are so many other people who expect that build shall
be via make that you have no real alternative.
Now in one study, someone had wanted to do a survey of genetic sequence
analysis software. They reported no results for half the programs,
because they had attempted to build them, and failed. They didn't say,
but it's a fair bet that most of those build systems used make. The
software distribution system is a disaster and badly needs fixing.
But there are lots of caveats. Bart's system might be better, but as
you say it needs traction. I'd be reluctant to evangelise for it and get
everyone to use it at work, because it might prove to have major
drawbacks, and then I'd get the blame.
There's a lite, flexible version of it, which doesn't interfere with any
existing uses of 'make'.

That is to also provide a simple list of the C files somewhere, in a
comment or a text file, plus any other notes needed to build the project
(written in English or Norwegian, I don't care; Norwegian would be easier
to decode than a typical makefile).

This is exactly what you did with the resource compiler, specifying the
three lots of *.c files needed to build it; no makefiles or CMake needed
(which failed if you remember).
David Brown
2024-02-02 10:13:42 UTC
Reply
Permalink
Post by Malcolm McLean
Post by bart
It works for me, and I'm sure could work for others if they didn't
have makefiles forced down their throats and hardwired into their brains.
/Nobody/ has makefiles forced on them.  People use "make" because it
is convenient, and it works.  If something better comes along, and it
is better enough to overcome the familiarity momentum, people will use
that.
What?
You have total control of your programming environment and never have to
consider anybody else? For hobby programming you do in a way. Not if you
want other people to use your stuff. But can always say that fun of
doing things exactly your way outweighs the fun of getting downloads.
Okay, none of the people talking about "make" /here/ had it forced on
them for the uses they are talking about /here/.

Yes, I have a very large degree of control over my programming
environment - because I work in a company where employees get to make
the decisions that they are best qualified to make, and management's job
is to support them. One of the important factors I consider is
interaction with colleagues and customers, for which "make" works well.

And while people may be required to use make, or particular compilers,
or OS's, no one is forced to /like/ a tool or find it useful. I believe
that when people here say they like make, or find it works well for
them, or that it can handle lots of different needs, or that they know
of nothing better for their requirements, they are being honest about
that. If they didn't like it, they would say.

The only person here whom we can be absolutely sure does /not/ have
"make" forced upon them for their development, is Bart. And he is the
one who complains about it.
bart
2024-02-02 10:54:22 UTC
Reply
Permalink
Post by David Brown
Post by Malcolm McLean
Post by bart
It works for me, and I'm sure could work for others if they didn't
have makefiles forced down their throats and hardwired into their brains.
/Nobody/ has makefiles forced on them.  People use "make" because it
is convenient, and it works.  If something better comes along, and it
is better enough to overcome the familiarity momentum, people will
use that.
What?
You have total control of your programming environment and never have
to consider anybody else? For hobby programming you do in a way. Not
if you want other people to use your stuff. But can always say that
fun of doing things exactly your way outweighs the fun of getting
downloads.
Okay, none of the people talking about "make" /here/ had it forced on
them for the uses they are talking about /here/.
Yes, I have a very large degree of control over my programming
environment - because I work in a company where employees get to make
the decisions that they are best qualified to make, and management's job
is to support them.  One of the important factors I consider is
interaction with colleagues and customers, for which "make" works well.
And while people may be required to use make, or particular compilers,
or OS's, no one is forced to /like/ a tool or find it useful.  I believe
that when people here say they like make, or find it works well for
them, or that it can handle lots of different needs, or that they know
of nothing better for their requirements, they are being honest about
that.  If they didn't like it, they would say.
The only person here whom we can be absolutely sure does /not/ have
"make" forced upon them for their development, is Bart.  And he is the
one who complains about it.
Not for my own development, no. Unless that includes having to build
external dependencies from source, which are written in C.

Or just things I want to test my C compiler on.

If I want to build Seed7, for example, that comes with 19 different
makefiles. LibJPEG has 15 different makefiles. GMP has one makefile,
but a 30,000-line configure script that depends on Linux.

I could, and have, spent a lot of time on many of those, manually
discovering the C files necessary to build the project.

Once done, the process was beautifully streamlined and simple.

But I know this is a waste of time and nobody's mind is going to be changed.
Janis Papanagnou
2024-02-02 00:46:36 UTC
Reply
Permalink
I've nothing against shorter or simpler makefiles. [...]
During mid/late 1990's someone at our site looked for an alternative
to Make. After some evaluation of tools it was decided to not replace
Make. I've just googled for what at that time appeared to be the most
promising candidate (it's obviously still there) and the description
of Jam reads as it would fulfill some of the requirements that have
been mentioned by various people here (see https://freetype.org/jam/
for details).

Janis
Kaz Kylheku
2024-02-01 16:20:24 UTC
Reply
Permalink
Post by David Brown
5. Modules provide encapsulation of data, code and namespaces.
Case study: C++ originally had only classes, which provide this. Then it
acquired the namespace construct, which also provides it. In spite of
that, someone decided it needs modules also.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
Lawrence D'Oliveiro
2024-02-01 21:34:31 UTC
Reply
Permalink
Post by David Brown
2. You can compile modules independently to allow partial builds.
In our Comp Sci classes we were careful to draw a distinction between
“separate” and “independent” compilation. The latter is exemplified by
(old-style) Fortran and C, where the same name may be declared in multiple
units, and the linker will happily tie them together, but without any
actual checking that the declarations match.

“Separate” compilation, on the other hand, means that there is some
consistency checking done between the declarations, and the program will
fail to build if there are mismatches. Ada has this. And it looks like
Fortran has acquired it, too, since the Fortran 90 spec.
Richard Harnden
2024-02-01 16:09:53 UTC
Reply
Permalink
Post by bart
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
bart
2024-02-01 17:32:01 UTC
Reply
Permalink
Post by Richard Harnden
Post by bart
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
Scott Lurndal
2024-02-01 19:25:12 UTC
Reply
Permalink
Post by Richard Harnden
Post by bart
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
You sure about that? They sure used to have them
as an add-on. IIRC, they're still part of visual studio.
bart
2024-02-01 19:51:53 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Richard Harnden
Post by bart
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
You sure about that? They sure used to have them
as an add-on. IIRC, they're still part of visual studio.
Visual Studio is a 10,000MB monster. It might well have it around, but
it's so complex, it's been years since I've even seen discrete cl.exe
and link.exe programs, despite scouring massive, 11-deep directory
structures.

Meanwhile my everyday compilers are 0.4MB for my language and 0.3MB for C.

I like to keep things simple. Everybody else likes to keep things
complicated, and the more the better.

Anyway, acquiring VS just to build one small program would be like
using a giant sledgehammer, 1000 times normal size, to crack a tiny nut.
Chris M. Thomasson
2024-02-01 20:12:59 UTC
Reply
Permalink
Post by bart
Post by Richard Harnden
Post by bart
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
You sure about that?  They sure used to have them
as an add-on.  IIRC, they're still part of visual studio.
Visual Studio is a 10,000MB monster.
Shit happens. I still use MSVC, quite a lot actually. I install
everything! ;^) Have the space, so, well, okay. ;^)
Post by bart
It might well have it around, but
it's so complex, it's been years since I've even seen discrete cl.exe
and link.exe programs, despite scouring massive, 11-deep directory
structures.
Meanwhile my everyday compilers are 0.4MB for my language and 0.3MB for C.
I like to keep things simple. Everybody else likes to keep things
complicated, and the more the better.
Anyway, acquiring VS just to build one small program would be like just
a giant sledgehammer, 1000 times normal size, to crack a tiny nut.
Chris M. Thomasson
2024-02-01 20:43:51 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by bart
Post by Richard Harnden
Post by bart
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
You sure about that?  They sure used to have them
as an add-on.  IIRC, they're still part of visual studio.
Visual Studio is a 10,000MB monster.
Shit happens. I still use MSVC, quite a lot actually. I install
everything! ;^) Have the space, so, well, okay. ;^)
The fat bastard wants me to update to version (17.8.6). I currently have
(17.8.5):

:^)



Ham On! LOL! ;^)
Post by Chris M. Thomasson
Post by bart
It might well have it around, but it's so complex, it's been years
since I've even seen discrete cl.exe and link.exe programs, despite
scouring massive, 11-deep directory structures.
Meanwhile my everyday compilers are 0.4MB for my language and 0.3MB for C.
I like to keep things simple. Everybody else likes to keep things
complicated, and the more the better.
Anyway, acquiring VS just to build one small program would be like
just a giant sledgehammer, 1000 times normal size, to crack a tiny nut.
Michael S
2024-02-01 20:36:47 UTC
Reply
Permalink
On Thu, 1 Feb 2024 19:51:53 +0000
Post by bart
Post by Scott Lurndal
Post by Richard Harnden
Post by bart
BTW that 'make' only works on my machine because it happens to
be part of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
You sure about that? They sure used to have them
as an add-on. IIRC, they're still part of visual studio.
Visual Studio is a 10,000MB monster. It might well have it around,
but it's so complex, it's been years since I've even seen discrete
cl.exe and link.exe programs, despite scouring massive, 11-deep
directory structures.
If you only download the command-line build tools then it's somewhat
less huge.
The 2022 version is 3,152,365,436 bytes.
I don't know the size of the installation package. It looks like on my
home PC I used the online installer.
Post by bart
Meanwhile my everyday compilers are 0.4MB for my language and 0.3MB for C.
I like to keep things simple. Everybody else likes to keep things
complicated, and the more the better.
Anyway, acquiring VS just to build one small program would be like
just a giant sledgehammer, 1000 times normal size, to crack a tiny
nut.
David Brown
2024-02-01 22:09:48 UTC
Reply
Permalink
Post by Richard Harnden
Post by bart
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
Those are part of MSVC, which runs on Windows but does not come with it.
"nmake" is MS's version of "make", and has been shipped with most MS
development tools for many decades.
Lawrence D'Oliveiro
2024-02-01 23:32:46 UTC
Reply
Permalink
"nmake" is MS's version of "make" ...
I think they did originally have a tool called “make”. But this was so
crap in comparison to the GNU/POSIX equivalent that they changed the name
in the new version to try to distance themselves from the bad taste the
old version left in people’s mouths.
Lawrence D'Oliveiro
2024-01-31 21:17:37 UTC
Reply
Permalink
Post by vallor
$ make -j
The last time I tried that on an FFmpeg build, it brought my machine to
its knees. ;)
David Brown
2024-02-01 08:48:45 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by vallor
$ make -j
The last time I tried that on an FFmpeg build, it brought my machine to
its knees. ;)
Sometimes "make -j" can be a bit enthusiastic about the number of
processes it starts. If there are many operations it /could/ do, trying
to run them all can chew through a lot more memory than you'd like. I
usually use something like "make -j 8", though the ideal number of
parallel tasks depends on the number of cpu cores you have, their type
(SMT threads or real cores, "big" cores or "little" cores), memory,
speed of disks, additional tools like ccache or distcc, etc.

I'd rather "make -j" (without a number) defaulted to using the number of
cpu cores, as that is a reasonable guess for most compilations.
Keith Thompson
2024-02-01 19:49:36 UTC
Reply
Permalink
David Brown <***@hesbynett.no> writes:
[...]
Post by David Brown
I'd rather "make -j" (without a number) defaulted to using the number
of cpu cores, as that is a reasonable guess for most compilations.
Agreed, but there might not be a sufficiently portable way to determine
that number.
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+***@gmail.com
Working, but not speaking, for Medtronic
void Void(void) { Void(); } /* The recursive call of the void */
Lawrence D'Oliveiro
2024-02-01 21:39:40 UTC
Reply
Permalink
Post by Keith Thompson
Post by David Brown
I'd rather "make -j" (without a number) defaulted to using the number
of cpu cores, as that is a reasonable guess for most compilations.
Agreed, but there might not be a sufficiently portable way to determine
that number.
nproc(1) is part of the GNU Core Utilities
<manpages.debian.org/1/nproc.1.html>.
Keith Thompson
2024-02-01 23:24:00 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Keith Thompson
Post by David Brown
I'd rather "make -j" (without a number) defaulted to using the number
of cpu cores, as that is a reasonable guess for most compilations.
Agreed, but there might not be a sufficiently portable way to determine
that number.
nproc(1) is part of the GNU Core Utilities
<manpages.debian.org/1/nproc.1.html>.
And GNU make is not, so it's possible that a system might have make but
not nproc. Also, nproc was added to GNU Coreutils in 2009, and the
current meaning of "make -j" with no numeric argument was defined before
that.

A new "-J" option that means "-j $(nproc)" might be useful, but it's
easy enough to use "make -j $(nproc)".
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+***@gmail.com
Working, but not speaking, for Medtronic
void Void(void) { Void(); } /* The recursive call of the void */
Lawrence D'Oliveiro
2024-02-01 23:38:17 UTC
Reply
Permalink
Post by Keith Thompson
Post by Lawrence D'Oliveiro
nproc(1) is part of the GNU Core Utilities
<manpages.debian.org/1/nproc.1.html>.
And GNU make is not, so it's possible that a system might have make but
not nproc.
While that is theoretically possible, I somehow think such an installation
would feel to the typical *nix user somewhat ... crippled.

Particularly since the “install” command is part of coreutils.

Also imagine trying to do builds, or any kind of development, on a system
without the “mkdir” command--another component of coreutils.
Kaz Kylheku
2024-02-01 23:53:03 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Keith Thompson
Post by Lawrence D'Oliveiro
nproc(1) is part of the GNU Core Utilities
<manpages.debian.org/1/nproc.1.html>.
And GNU make is not, so it's possible that a system might have make but
not nproc.
While that is theoretically possible, I somehow think such an installation
would feel to the typical *nix user somewhat ... crippled.
Selected GNU programs can be individually installed on Unix-like systems
which already have other tools of their own.
Post by Lawrence D'Oliveiro
Particularly since the “install” command is part of coreutils.
The install utility appeared in 4.2 BSD, which was released in
August 1983.

The GNU Project was announced in September 1983.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
David Brown
2024-02-01 22:14:05 UTC
Reply
Permalink
Post by Keith Thompson
[...]
Post by David Brown
I'd rather "make -j" (without a number) defaulted to using the number
of cpu cores, as that is a reasonable guess for most compilations.
Agreed, but there might not be a sufficiently portable way to determine
that number.
gcc manages to figure it out for parallel tasks, such as LTO linking. I
think it would be reasonable enough to have it use the number of cores
when it was able to figure it out, and a default (say, 4) when it could not.
David Brown
2024-01-30 08:17:51 UTC
Reply
Permalink
Post by bart
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of
compiling assemblies of modules.
C doesn't.
   cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
That is significant advance on what C compilers typically do.
You are absolutely right that C does not have any real kind of module
system, and that can be a big limitation compared to other languages.
However, I don't think the build system is where the lack of modules is
an issue - it is the scaling of namespaces and identifier clashes that
are the key challenge for large C projects.

Building is already solved - "make" handles everything from tiny
projects to huge projects. When "make" isn't suitable, you need /more/,
not less - build server support, automated build and test systems, etc.
And for users who like simpler things and have simpler projects, IDE's
are almost certainly a better option and will handle project builds.

I don't doubt that your build system is simpler and easier for the type
of project for which it can work - but I doubt that there are many
people who work with such limited scope projects and who don't already
have a build method that works for their needs. Still, if it is useful
for you, and useful for some other people, then that makes it useful.
bart
2024-01-30 12:09:01 UTC
Reply
Permalink
Post by David Brown
Post by bart
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of
compiling assemblies of modules.
C doesn't.
    cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
That is significant advance on what C compilers typically do.
You are absolutely right that C does not have any real kind of module
system, and that can be a big limitation compared to other languages.
However, I don't think the build system is where the lack of modules is
an issue - it is the scaling of namespaces and identifier clashes that
are the key challenge for large C projects.
Building is already solved - "make" handles everything from tiny
projects to huge projects.  When "make" isn't suitable, you need /more/,
not less - build server support, automated build and test systems, etc.
And for users who like simpler things and have simpler projects, IDE's
are almost certainly a better option and will handle project builds.
I've already covered this in many posts on the subject. But 'make' deals
with three kinds of requirements:

(1) Specifying which modules are to be compiled and combined into one
binary file

(2) Specifying dependences between all files to allow rebuilding of that
one file with minimal recompilation

(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries, specifying
dependencies between binaries, installation etc.

My proposal tackles only (1), which is something that many languages now
have the means to deal with themselves. I already stated that (2) is not
covered.

But you may still need makefiles to deal with (3).

If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C compiler.

Then no separate script or 'make' utility is needed.

I also outlined a way to make this work with any existing compiler.
(Needs an extra C module. Effectively the list of #pragmas becomes a
script which is processed by this module. But no extra language is
needed; only C.)
Chris M. Thomasson
2024-01-30 23:25:56 UTC
Reply
Permalink
Post by David Brown
Post by bart
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of
compiling assemblies of modules.
C doesn't.
    cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
That is significant advance on what C compilers typically do.
You are absolutely right that C does not have any real kind of module
system, and that can be a big limitation compared to other languages.
However, I don't think the build system is where the lack of modules is
an issue - it is the scaling of namespaces and identifier clashes that
are the key challenge for large C projects.
Yup. ct_*, ct_experimental_*, ct_test_*, etc...

Namespaces in C are fun... ;^)
Post by David Brown
Building is already solved - "make" handles everything from tiny
projects to huge projects.  When "make" isn't suitable, you need /more/,
not less - build server support, automated build and test systems, etc.
And for users who like simpler things and have simpler projects, IDE's
are almost certainly a better option and will handle project builds.
I don't doubt that your build system is simpler and easier for the type
of project for which it can work - but I doubt that there are many
people who work with such limited scope projects and who don't already
have a build method that works for their needs.  Still, if it is useful
for you, and useful for some other people, then that makes it useful.
Chris M. Thomasson
2024-01-31 01:50:03 UTC
Reply
Permalink
Post by Chris M. Thomasson
Post by David Brown
Post by bart
Post by Lawrence D'Oliveiro
Post by bart
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means
of compiling assemblies of modules.
C doesn't.
    cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
That is significant advance on what C compilers typically do.
You are absolutely right that C does not have any real kind of module
system, and that can be a big limitation compared to other languages.
However, I don't think the build system is where the lack of modules
is an issue - it is the scaling of namespaces and identifier clashes
that are the key challenge for large C projects.
Yup. ct_*, ct_experimental_*, ct_test_*, ect...
Namespaces in C are fun... ;^)
The really fun part. Somebody else has the initials CT, so I have to use
cmt_* for the main prefix. ;^)
Post by Chris M. Thomasson
Post by David Brown
Building is already solved - "make" handles everything from tiny
projects to huge projects.  When "make" isn't suitable, you need
/more/, not less - build server support, automated build and test
systems, etc. And for users who like simpler things and have simpler
projects, IDE's are almost certainly a better option and will handle
project builds.
I don't doubt that your build system is simpler and easier for the
type of project for which it can work - but I doubt that there are
many people who work with such limited scope projects and who don't
already have a build method that works for their needs.  Still, if it
is useful for you, and useful for some other people, then that makes
it useful.
Lawrence D'Oliveiro
2024-01-31 03:14:34 UTC
Reply
Permalink
Post by David Brown
You are absolutely right that C does not have any real kind of module
system ...
Guess which language, which was already considered a bit ancient when C
became popular, has a module system now?

Fortran.
Chris M. Thomasson
2024-02-01 04:38:31 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by David Brown
You are absolutely right that C does not have any real kind of module
system ...
Guess which language, which was already considered a bit ancient when C
became popular, has a module system now?
Fortran.
Some people like apples, others might like oranges?
Tim Rentsch
2024-01-31 00:46:56 UTC
Reply
Permalink
[description of a rudimentary C build system]
What was described is what I might call the easiest and
least important part of a build system.

Looking over one of my current projects (modest in size,
a few thousand lines of C source, plus some auxiliary
files adding perhaps another thousand or two), here are
some characteristics essential for my workflow (given
in no particular order):

* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)

* use different flag settings for different translation
units

* be able to express dependency information

* produce generated source files, sometimes based
on other source files

* be able to invoke arbitrary commands, including
user-written scripts or other programs

* build or rebuild some outputs only when necessary

* condition some processing steps on successful
completion of other processing steps

* deliver partially built as well as fully built
program units

* automate regression testing and project archival
(in both cases depending on completion status)

* produce sets of review locations for things like
program errors or TBD items

* express different ways of combining compiler
outputs (such as .o files) depending on what
is being combined and what output is being
produced (sometimes a particular set of inputs
will be combined in several different ways to
produce several different outputs)

Indeed it is the case that producing a complete program is one
part of my overall build process. But it is only one step out
of many, and it is easy to express without needing any special
considerations from the build system.
Lawrence D'Oliveiro
2024-01-31 03:13:20 UTC
Reply
Permalink
Post by Tim Rentsch
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
Kaz Kylheku
2024-01-31 03:23:46 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Tim Rentsch
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
That's the sort of stunt that explains why distros have given up on
clean cross compiling, and resorted to Qemu.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
David Brown
2024-01-31 07:47:20 UTC
Reply
Permalink
Post by Kaz Kylheku
Post by Lawrence D'Oliveiro
Post by Tim Rentsch
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
That's the sort of stunt why distros have given up on clean cross
compiling, and resorted to Qemu.
It is also the sort of stunt that reduces development effort and ensures
that you minimise the risk of documentation being out of sync with the
program. I have never tried to build Blender, so I can't comment on
this particular project, but if it is done right then I don't see a big
problem. (If it is done wrong, requiring multiple "make" invocations or
something like that, then it can be annoying.)

For distros trying to make good meta-build systems, something like that
is minor compared to C source files using __DATE__ and __TIME__ (or even
worse, $Id$) to generate version numbers.
Spiros Bousbouras
2024-01-31 11:02:35 UTC
Reply
Permalink
On Wed, 31 Jan 2024 08:47:20 +0100
Post by David Brown
Post by Kaz Kylheku
Post by Lawrence D'Oliveiro
Post by Tim Rentsch
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
That's the sort of stunt why distros have given up on clean cross
compiling, and resorted to Qemu.
It is also the sort of stunt that reduces development effort and ensures
that you minimise the risk of documentation being out of sync with the
program.
I don't see how it achieves such tasks. For preventing loss of agreement
between behaviour and documentation , the developers must have the necessary
self-discipline to modify the documentation when they make changes in the
behaviour. If they have such self-discipline then it's no harder to modify a
separate documentation file than it is to modify the part of the source code
which prints the --help output. Personally , I have the file(s) with the
documentation as additional tabs in the same vim session where other tabs
have the source code.

Regarding development effort , it's actually more development effort to have
the documentation as source code which prints a message. If it is source code
then you have the usual requirements for good documentation *plus* the
requirement that it is embedded within legal source code in the language you
are writing. If for example the text of the documentation has somewhere a
double quote then you have to take into account whether a double quote has a
special meaning in the programming language you are using (and likely it will
have). If the documentation is a separate file then you don't have this
additional requirement.

Also , the output of --help should be a short reminder whereas
documentation should be longer , possibly much longer , possibly containing a
tutorial , depending on how complex the application is.
Post by David Brown
I have never tried to build Blender, so I can't comment on
this particular project, but if it is done right then I don't see a big
problem. (If it is done wrong, requiring multiple "make" invocations or
something like that, then it can be annoying.)
--
vlaho.ninja/menu
David Brown
2024-01-31 14:31:29 UTC
Reply
Permalink
Post by Spiros Bousbouras
On Wed, 31 Jan 2024 08:47:20 +0100
Post by David Brown
Post by Kaz Kylheku
Post by Lawrence D'Oliveiro
Post by Tim Rentsch
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
That's the sort of stunt that explains why distros have given up on clean
cross-compiling and resorted to QEMU.
It is also the sort of stunt that reduces development effort and ensures
that you minimise the risk of documentation being out of sync with the
program.
I don't see how it achieves such tasks. For preventing loss of agreement
between behaviour and documentation, the developers must have the necessary
self-discipline to modify the documentation when they make changes in the
behaviour. If they have such self-discipline then it's no harder to modify a
separate documentation file than it is to modify the part of the source code
which prints the --help output. Personally, I have the file(s) with the
documentation as additional tabs in the same vim session where other tabs
have the source code.
They must document the user-visible features in (at least) two places -
the "man" page, and the "--help" output. By using automation to
generate one of these from the other, they reduce the duplicated effort.
Post by Spiros Bousbouras
Also, the output of --help should be a short reminder whereas
documentation should be longer, possibly much longer, possibly containing a
tutorial, depending on how complex the application is.
The same applies to "man" pages. Sometimes it makes sense to have short
"--help" outputs and longer "man" pages, but if the documentation is
longer than perhaps a dozen pages/screenfuls, "man" is unsuitable. And
I imagine that the documentation for blender, along with its tutorials
(as you say), is many orders of magnitude more than that. Keeping the
"man" page and "--help" output the same seems sensible here.
Scott Lurndal
2024-01-31 15:13:20 UTC
Reply
Permalink
Post by David Brown
Post by Spiros Bousbouras
On Wed, 31 Jan 2024 08:47:20 +0100
Post by David Brown
Post by Kaz Kylheku
Post by Lawrence D'Oliveiro
Post by Tim Rentsch
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
That's the sort of stunt that explains why distros have given up on clean
cross-compiling and resorted to QEMU.
It is also the sort of stunt that reduces development effort and ensures
that you minimise the risk of documentation being out of sync with the
program.
I don't see how it achieves such tasks. For preventing loss of agreement
between behaviour and documentation, the developers must have the necessary
self-discipline to modify the documentation when they make changes in the
behaviour. If they have such self-discipline then it's no harder to modify a
separate documentation file than it is to modify the part of the source code
which prints the --help output. Personally, I have the file(s) with the
documentation as additional tabs in the same vim session where other tabs
have the source code.
They must document the user-visible features in (at least) two places -
the "man" page, and the "--help" output. By using automation to
generate one of these from the other, they reduce the duplicated effort.
Indeed. In our case, we generate the manpages using nroff and
the simulator 'help' command will call system("man ${INSTALL_LOC}/man/topic.man")
to display the help text. We also process the manpage source files with troff
to generate pages appended to the end of the users guide (troff MOM
macro set) PDF.

Only one place (the manpage source file) need be updated.
Lawrence D'Oliveiro
2024-01-31 23:00:58 UTC
Reply
Permalink
... and the simulator 'help' command will call system("man
${INSTALL_LOC}/man/topic.man")
Agh! Why do people feel the need to go through a shell where a shell is
not needed?
Scott Lurndal
2024-02-01 00:29:23 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
... and the simulator 'help' command will call system("man
${INSTALL_LOC}/man/topic.man")
Agh! Why do people feel the need to go through a shell where a shell is
not needed?
Because 'system()' works and it's a lot less code than
fork and exec?

How would you display a manpage using nroff markup
from an application?
Lawrence D'Oliveiro
2024-02-01 03:07:04 UTC
Reply
Permalink
How would you display a manpage using nroff markup from an application?
Much safer:

subprocess.run \
(
args = ("man", os.path.expandvars("${INSTALL_LOC}/man/topic.man"))
)
Scott Lurndal
2024-02-01 15:00:03 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
How would you display a manpage using nroff markup from an application?
subprocess.run \
(
args = ("man", os.path.expandvars("${INSTALL_LOC}/man/topic.man"))
)
WTF?

You are aware you are posting to comp.lang.c, right?
Lawrence D'Oliveiro
2024-02-01 23:40:21 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Lawrence D'Oliveiro
How would you display a manpage using nroff markup from an
application?
subprocess.run \
(
args = ("man", os.path.expandvars("${INSTALL_LOC}/man/topic.man"))
)
You are aware you are posting to comp.lang.c, right?
Yes. Nevertheless, this is the clearest and most concise (read: least work
involved for me) way of explaining what I mean; I will leave it to the C
experts to translate it into their preferred lower-level way of doing
things.
vallor
2024-02-01 08:15:52 UTC
Reply
Permalink
On Wed, 31 Jan 2024 23:00:58 -0000 (UTC), Lawrence D'Oliveiro
Post by Lawrence D'Oliveiro
... and the simulator 'help' command will call system("man
${INSTALL_LOC}/man/topic.man")
Agh! Why do people feel the need to go through a shell where a shell is
not needed?
In my case: it wasn't so much of a "need" -- more of a "want". :)

Generally, I'd agree with you; but let's say Programmer Joe decides to
change his path to run his own special version of make(1). (Maybe he's on
an ancient SunOS system with gnu make in (say) /opt/gnu/bin. You know --
weird Unix stuff.)

So who are we to decide "no gnus allowed"?

Okay, maybe that's a weak example -- but
yes, I wouldn't use system(3) in any program
that needs to be very specific about what it passes
on to its children. (I wouldn't -- but someone else might.)
--
-v
Lawrence D'Oliveiro
2024-01-31 23:02:32 UTC
Reply
Permalink
... but if the documentation is
longer than perhaps a dozen pages/screenfuls, "man" is unsuitable.
So it is your considered opinion, then, that the bash man page is
“unsuitable”?

***@theon:~> man bash | wc -l
5276

Actually I refer to it quite a lot. Being able to use search functions
helps.
Scott Lurndal
2024-02-01 00:33:52 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
... but if the documentation is
longer than perhaps a dozen pages/screenfuls, "man" is unsuitable.
So it is your considered opinion, then, that the bash man page is
“unsuitable”?
5276
Actually I refer to it quite a lot. Being able to use search functions
helps.
When working with the ksh man page, I use vim.

function viman
{
a=$(mktemp absXXXXXXX)
man "$1" | col -b > ${a}
vim ${a}
rm ${a}
}


$ viman ksh
Janis Papanagnou
2024-02-01 14:11:48 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Lawrence D'Oliveiro
... but if the documentation is
longer than perhaps a dozen pages/screenfuls, "man" is unsuitable.
So it is your considered opinion, then, that the bash man page is
“unsuitable”?
5276
Actually I refer to it quite a lot. Being able to use search functions
helps.
When working with the ksh man page, I use vim.
function viman
{
a=$(mktemp absXXXXXXX)
man "$1" | col -b > ${a}
vim ${a}
rm ${a}
}
$ viman ksh
In some modern shells (ksh, bash, zsh) you may use process substitution
and avoid creating a temporary file (it simplifies things)...

vim <(man "$1" | col -b)


Janis
Janis Papanagnou
2024-02-01 13:55:20 UTC
Reply
Permalink
Post by David Brown
They must document the user-visible features in (at least) two places -
the "man" page, and the "--help" output. By using automation to
generate one of these from the other, they reduce the duplicated effort.
Post by Spiros Bousbouras
Also, the output of --help should be a short reminder whereas
documentation should be longer, possibly much longer, possibly containing a
tutorial, depending on how complex the application is.
The same applies to "man" pages. Sometimes it makes sense to have short
"--help" outputs and longer "man" pages, but if the documentation is
longer than perhaps a dozen pages/screenfuls, "man" is unsuitable. And
I imagine that the documentation for blender, along with its tutorials
(as you say), is many orders of magnitude more than that. Keeping the
"man" page and "--help" output the same seems sensible here.
Ksh93 has chosen an interesting path here; it has a powerful getopts
command (to parse the command line options), and has extended the well
known simple option-string format into what is effectively a small
language of its own for specifying everything about the options (type,
defaults, forms, purpose, etc.). This allows automated generation of
output in several formats (HTML, man, etc.) for every command that uses
ksh93 getopts to parse its options (try 'getopts --man' [in ksh93] for
details).

There are a couple of similar approaches elsewhere (Eiffel extracts some
properties inherent to the language from the source, Java extracts
user-defined Javadoc comments, etc.).

Janis
Janis Papanagnou
2024-02-01 13:42:13 UTC
Reply
Permalink
Post by Spiros Bousbouras
On Wed, 31 Jan 2024 08:47:20 +0100
Post by David Brown
It is also the sort of stunt that reduces development effort and ensures
that you minimise the risk of documentation being out of sync with the
program.
This statement also set some neurons firing for me, and made me recall
the development processes we used (in some large projects, not in
one-man shows)...
Post by Spiros Bousbouras
I don't see how it achieves such tasks. For preventing loss of agreement
between behaviour and documentation, the developers must have the necessary
self-discipline to modify the documentation when they make changes in the
behaviour.
This self-discipline can be supported by tools. If we had to change
things (due to feature request or bug) we opened a 'feature or bug
request', established a 'track' and associated some 'fix-records'
to the track. The 'request' contained the description, requirements,
or specification, the individual 'fix-records' were, e.g., for code,
or documentation, or dependent parts, and separately assigned.
A (typical?) method to organize things and not forget an important
step or product part.

(Note: The used 'keywords' are approximations and may not actually
match the tool's literals.)
Post by Spiros Bousbouras
If they have such self-discipline then it's no harder to modify a
separate documentation file than it is to modify the part of the source code
which prints the --help output. Personally, I have the file(s) with the
documentation as additional tabs in the same vim session where other tabs
have the source code.
[...]
Janis
bart
2024-01-31 12:19:10 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Tim Rentsch
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
I do things a bit more simply. The '-help' text for my MCC program is
implemented with this line in the original source:

println strinclude("help.txt")

The help text is just a regular text file. So often you see reams of
printf statements containing strings full of escape codes...


But I can do complex too, if this is what we're trying to show off.

I needed to produce (some time ago...) a printed manual for my
application (CAD software), which contained a mix of text, tables,
images and vector diagrams:

- The text was created with an ordinary text editor

- It incorporated runoff-like commands that I'd devised

- It was processed by a program I'd written in a scripting language.

- That program ran under the application in question

- It rendered the manual a page at a time; each page was then processed
by the app's PostScript driver and sent to an actual PostScript printer

- I also wrote the manual

- I also wrote the whole CAD application

- I created both the language used for the app, and the scripting language

- I /implemented/ both languages, one of them in itself

- Oh, and I wrote the text editor!

- I believe this was done pre-Windows, which meant also writing various
drivers for graphics adaptors, all the libraries needed to draw stuff
including GUI, providing bitmap and vector fonts, writing printer and
plotter drivers (of which the PS/EPS driver was one), etc etc


So this was not just orchestrating various bits of pre-existing
software, which makes Unix people feel so superior because they can do:

x | a | b | c > y

instead of (using default file extensions):

a x
b x
c x
bart
2024-02-01 14:29:06 UTC
Reply
Permalink
Post by Tim Rentsch
[description of a rudimentary C build system]
What was described is what I might call the easiest and
least important part of a build system.
Looking over one of my current projects (modest in size,
a few thousand lines of C source, plus some auxiliary
files adding perhaps another thousand or two), here are
some characteristics essential for my workflow (given
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
* use different flag settings for different translation
units
* be able to express dependency information
* produce generated source files, sometimes based
on other source files
* be able to invoke arbitrary commands, including
user-written scripts or other programs
* build or rebuild some outputs only when necessary
* condition some processing steps on successful
completion of other processing steps
* deliver partially built as well as fully built
program units
* automate regression testing and project archival
(in both cases depending on completion status)
* produce sets of review locations for things like
program errors or TBD items
* express different ways of combining compiler
outputs (such as .o files) depending on what
is being combined and what output is being
produced (sometimes a particular set of inputs
will be combined in several different ways to
produce several different outputs)
Indeed it is the case that producing a complete program is one
part of my overall build process. But it is only one step out
of many, and it is easy to express without needing any special
considerations from the build system.
Looking over one of my current projects (modest in size,
a few thousand lines of C source, plus some auxiliary
files adding perhaps another thousand or two),
So, will a specific build of such a project produce a single EXE/DLL//SO
file? (The '//' stands for the typically empty file extension of Linux
executables.)

This is all I want for a build.

I guess if you wrote your program in a language XXX that provided this
build process for example:

xxxc -build leadmodule.xxx

you would find it equally unusable because it doesn't provide the
flexibility you're accustomed to from the chaotic, DIY nature of your
current methods.

The idea is that you have a tool that provides the basic build process
as illustrated with the xxxc example, and you superimpose any custom
requirements on top of that, making use of whatever customisation
abilities it does provide.

An analogy would be switching to a language that doesn't have C's
preprocessor. If your coding style depends on macros that yield random
bits of syntax, or on conditional blocks that arbitrarily choose which
lines to process, then you can also dismiss it as unusable.
David Brown
2024-02-01 15:43:58 UTC
Reply
Permalink
Post by bart
Post by Tim Rentsch
Looking over one of my current projects (modest in size,
a few thousand lines of C source, plus some auxiliary
files adding perhaps another thousand or two),
So, will a specific build of such a project produce a single EXE/DLL//SO
file? (The // includes the typical file extension of Linux executables.)
This is all I want for a build.
In my current project, when I run "make" it builds 5 different
executables, each in three formats with different post-processing by
other programs (not the compiler or linker). Most of my projects have
fewer, but four or five outputs is not at all uncommon. It is also
common that a few of the source files are generated by other programs as
part of the build. So if I have an embedded web server in the program,
I can change an html file and "make" will result in that being in the
encrypted download image ready for deployment.

Your tools can't do what I need for a lot of my work. Maybe they could
be useable for some projects or programs. But why would I bother with
them when I already need more serious and flexible tools for other
things, already have these better tools, and those better tools work
simply and easily for the simple and easy projects that your ones could
handle?
bart
2024-01-31 20:36:22 UTC
Reply
Permalink
Post by bart
Working with Other Compilers
----------------------------
Clearly, my scheme will only work with a suitably modified compiler.
Without that, I considered doing something like this, adding this:
    #pragma module "cipher.c"
    #pragma module "hmac.c"
    #pragma module "sha2.c"
    #ifndef __MCC__
        #include "runcc.c"
        int main(void) {
            runcc(__FILE__);
        }
    #endif
I tried to do a proof of concept today. But there's one problem I'm not
sure how to get around yet. However, the odd behaviour of gcc comes to
the rescue here.

Going with the same 3-file test project, I created this version of the
above:

#pragma module "cipher.c"
#pragma module "hmac.c"
#pragma module "sha2.c"

#ifndef __MCC__
#include "runcc.c"

int main(int n, char** args) {
char* compiler = (n>=2 ? args[1] : "tcc");

runcc(compiler, __FILE__);
}
#endif

runcc.c is 100 lines of code, but it is only to test the idea works.


First build this short program with any compiler, here using gcc:

c:\c>gcc demo.c

Now run the a.exe file produced, here shown in two different ways:

c:\c>a
Invoking compiler: tcc -o demo.exe cipher.c hmac.c sha2.c
Finished building: demo.exe

c:\c>a gcc
Invoking compiler: gcc -o demo.exe cipher.c hmac.c sha2.c
Finished building: demo.exe

c:\c>demo
argument count incorrect! ...

It defaults to using tcc to build, but a compiler can be provided as
shown. It wasn't possible to pick up the compiler used to build 'demo.c'.

The main problem is that if demo.c is compiled to demo.exe (the stub
program that reads the #pragmas from demo.c and invokes the compiler),
it is not possible for demo.exe to then build the application as
'demo.exe'; the two would clash, and Windows doesn't allow overwriting a
running executable anyway.

So gcc's a.exe helps for this demo.