Frequently Asked Questions about the GNU C Library

This document tries to answer questions a user might have when installing and using glibc. Please make sure you read this before sending questions or bug reports to the maintainers.

  • Installation: How to install the GNU C Library.
  • Maintenance: How to enhance and port the GNU C Library.
  • Contributors: Who wrote what parts of the GNU C Library.
  • Free Manuals: Free Software Needs Free Documentation.
  • Copying: The GNU Lesser General Public License says how you can copy and share the GNU C Library.


The GNU C library is very complex. The installation process has not been completely automated; there are too many variables. You can do substantial damage to your system by installing the library incorrectly. Make sure you understand what you are undertaking before you begin.

Contents

  1. Frequently Asked Questions about the GNU C Library
    1. Compiling glibc
    2. Installation and configuration issues
    3. Source and binary incompatibilities
    4. Runtime
    5. Developing Applications
    6. Miscellaneous

Compiling glibc

What systems does the GNU C Library run on?

Please see the README file for up-to-date details.

The GNU C Library supports these configurations for using Linux kernels:

  • i[4567]86-*-linux-gnu
  • x86_64-*-linux-gnu
  • powerpc-*-linux-gnu (hardware floating point required)
  • powerpc64-*-linux-gnu
  • s390-*-linux-gnu
  • s390x-*-linux-gnu
  • sh[34]-*-linux-gnu (requires Linux 2.6.11 or newer)
  • sparc*-*-linux-gnu
  • sparc64*-*-linux-gnu

Additional configurations are part of the ports directory, see the README for details.

What tools do I need to build GNU libc?

You need:

  • GCC, both the C compiler and the C++ compiler (for the testsuite)
  • GNU binutils
  • GNU make
  • perl
  • GNU awk
  • GNU sed
  • On Linux: The header files of the Linux kernel

Developers who modify glibc might additionally need:

  • gperf
  • GNU autoconf
  • GNU gettext
  • GNU texinfo

For details, see the manual section on 'Tools for Compilation' or read the INSTALL file in the glibc sources.

What version of the Linux kernel headers should be used?

The headers from the most recent Linux kernel should be used. The headers used while compiling the GNU C library and the kernel binary used when using the library do not need to match. The GNU C library runs without problems on kernels that are older than the kernel headers used. The other way round (compiling the GNU C library with old kernel headers and running on a recent kernel) does not necessarily work as expected. For example you can't use new kernel features if you used old kernel headers to compile the GNU C library.

Even if you are using an older kernel on your machine, we recommend you compile GNU libc with the most current kernel headers. That way you won't have to recompile libc if you ever upgrade to a newer kernel. To tell libc which headers to use, give configure the --with-headers switch (e.g. --with-headers=/usr/src/linux-3.3/include).

To install Linux kernel headers, run make headers_install in the kernel source tree. This is described in the kernel documentation.
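The two steps can be sketched as follows (the source and staging paths are placeholders, not real locations on your system):

```sh
# In the kernel source tree: export sanitized headers to a staging directory
make -C /path/to/linux-src headers_install INSTALL_HDR_PATH=/opt/kernel-headers

# Then point glibc's configure at those headers
/path/to/glibc-src/configure --prefix=/usr \
    --with-headers=/opt/kernel-headers/include
```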

When I run `nm -u libc.so' on the produced library I still find unresolved symbols. Can this be ok?

Yes, this is ok. There can be several kinds of unresolved symbols:

  • magic symbols automatically generated by the linker; these have names like __start_* and __stop_*
  • symbols starting with _dl_*, which come from the dynamic linker
  • weak symbols, which need not be resolved at all (fabs for example)

Generally, you should make sure you find a real program which produces errors while linking before deciding there is a problem.

What are these `add-ons'?

To enhance glibc there are additional add-ons which might be distributed as separate packages. Currently the libidn add-on is part of glibc and no add-ons are distributed as separate packages.

To use these packages as part of GNU libc, just unpack the tarfiles in the libc source directory and tell the configuration script about them using the --enable-add-ons option. If you give just --enable-add-ons configure tries to find all the add-on packages in your source tree. If you want to select only a subset of the add-ons, give a comma-separated list of the add-ons to enable:

  • configure --enable-add-ons=libidn

for example.

Add-ons can add features (including entirely new shared libraries), override files, provide support for additional architectures, and just about anything else. The existing makefiles do most of the work; only a few stub rules must be written to get everything running.

Most add-ons are tightly coupled to a specific GNU libc version. Please check that the add-ons work with the version of GNU libc you use.

With glibc 2.20 the nptl and ports add-ons, with glibc 2.2 the crypt add-on, and with glibc 2.1 the localedata add-on were integrated into the normal glibc distribution; nptl, ports, crypt, and localedata are therefore no longer add-ons. The linuxthreads add-on is likewise obsolete now that nptl is used.


My kernel emulates a floating-point coprocessor for me. Should I enable --with-fp?

This is only relevant for certain platforms like PowerPC or MIPS. The configuration of GNU libc must be consistent with the ABI that your compiler uses: both must be configured the same way.

An emulated FPU is just as good as a real one, as far as the C library and compiler are concerned. You only need to say --without-fp, and configure your compiler accordingly, if your machine has no way to execute floating-point instructions.

People who are interested in squeezing the last drop of performance out of their machine may wish to avoid the trap overhead by doing so.

Why do I get messages about missing thread functions when I use librt? I don't even use threads.

In this case you probably mixed up your installation. librt uses threads internally and has implicit references to the thread library. Normally these references are satisfied automatically but if the thread library is not in the expected place you must tell the linker where it is. When using GNU ld it works like this:

  • gcc -o foo foo.c -Wl,-rpath-link=/some/other/dir -lrt

The directory /some/other/dir should contain the thread library; ld will use the given path to find the implicitly referenced library without disturbing any other link path.

I get failures during `make check'. What should I do?

The testsuite should compile and run cleanly on your system; every failure should be looked into. Depending on the failures, you probably should not install the library at all.

You should consider reporting it in bugzilla providing as much detail as possible. If you run a test directly, please remember to set up the environment correctly. You want to test the compiled library - and not your installed one. The best way is to copy the exact command line which failed and run the test from the subdirectory for this test in the sources.

There are some failures which are not directly related to the GNU libc:

  • Some compilers produce buggy code. No compiler gets single precision complex numbers correct on Alpha. Otherwise, gcc-3.2 should be ok.
  • The kernel might have bugs. For example the tst-cpuclock2 test needs a fix that went in Linux 3.1 (patch).

What is symbol versioning good for? Do I need it?

Symbol versioning solves problems that are related to interface changes. One version of an interface might have been introduced in a previous version of the GNU C library but the interface or the semantics of the function has been changed in the meantime. For binary compatibility with the old library, a newer library needs to still have the old interface for old programs. On the other hand, new programs should use the new interface. Symbol versioning is the solution for this problem. The GNU libc uses symbol versioning by default unless it gets disabled via a configure switch.

We don't advise building without symbol versioning, since you lose binary compatibility - forever! The binary compatibility you lose is not only against the previous version of the GNU libc but also against all future versions. This means that you will not be able to execute programs that others have compiled.

How can I compile on my fast ix86 machine a working libc for an older and slower ix86? After installing libc, programs abort with 'Illegal Instruction'.

glibc and gcc might generate some instructions on your machine that aren't available on an older machine. You have to tell glibc that you are configuring for, e.g., i586 by adding the machine to the configure invocation, for example:

  • ../configure --prefix=/usr i586-pc-linux-gnu

And you need to tell gcc to generate only i586 code by adding -march=i586 (just -m586 doesn't work) to your CFLAGS.

Note that i486 is the oldest supported architecture since nptl needs atomic instructions and those were introduced with i486.

`make' fails when running rpcgen the first time, what is going on? How do I fix this?

The first invocation of rpcgen is also the first use of the recently compiled dynamic loader. If there is any problem with the dynamic loader it will more than likely fail to run rpcgen properly. This could be due to any number of problems.

The only real solution is to debug the loader and determine the problem yourself. Please remember that for each architecture there may be various patches required to get glibc HEAD into a runnable state. The best course of action is to determine if you have all the required patches.

Why do I get `#error "glibc cannot be compiled without optimization"` when trying to compile GNU libc with GNU CC?

There are a couple of reasons why the GNU C library will not work correctly if it is not compiled with optimization.

In the early startup of the dynamic loader (_dl_start), before relocation of the PLT, you cannot make function calls. You must inline the functions you will use during early startup, or call compiler builtins (__builtin_*).

Without optimizations enabled GNU CC will not inline functions. The early startup of the dynamic loader will make function calls via an unrelocated PLT and crash.

Without auditing the dynamic linker code it would be difficult to remove this requirement.

Another reason is that nested functions must be inlined in many cases to avoid executable stacks.

In practice there is no reason to compile without optimizations, therefore we require that GNU libc be compiled with optimizations enabled.

Installation and configuration issues

How do I install all of the GNU C Library project libraries that I just built?

The GNU C Library is not just a single library but a collection of libraries, including the C library, the math library, the threading libraries, the DNS stub resolver library, and the name service libraries. All of these libraries together constitute 'the implementation.'

The only pedantically correct way to install these libraries is to install them first into a temporary directory such as /tmp/glibc via make install DESTDIR=/tmp/glibc, then copy that directory into an initial root disk, boot the initial root disk, and copy the results to your root filesystem, and then pivot into the root filesystem as the final step of booting. That is the *only* safe way to install glibc today.

Notice however that no distribution does this. They don't do it because it would *require* a reboot after a glibc install, and at present that's only required if you want all processes running to reload glibc after a security update (since already running processes will still be running the old library). Instead the distributions use a package manager to unpack an archive of the libraries and install them into a running system. This is actually quite dangerous because at some point in time you will have a mixed copy of the libraries on your system, some will be new, some will be old, and that may cause, for that small window of time, all newly executed processes to fail to start. In a similar fashion during the upgrade the localization archive that contains locales for languages will be rebuilt, and during that period processes may fail to start if their needed localization language is missing (not yet rebuilt into the archive). Even the package management system is not immune, for example rpm has to take measures not to exec new processes while installing glibc, using instead a builtin lua interpreter to run scripts, to allow rpm to run using the old copies of the libraries as a cohesive whole, while installing the new copies.

In summary, the best way to install glibc is to install it from another system into the disk you're using, usually this can be done in an initial root disk, the next best way is via static or carefully crafted application that can copy the new files into place without itself trying to execute new processes with an incomplete partial install. Choose one or the other. Eventually the latter will become unsupportable on heavily loaded systems that share a runtime.

Linux note: Take care when building and testing new core runtimes over and over again. You may find you run out of disk space if you don't terminate old processes that use those libraries. If any process holds the file open in a mapping, as is done for a shared library, then even though you delete the file with unlink, the file is not removed until the mapping is undone (or the process exits). Therefore, if you have lots of old running processes, each may be holding open a set of core libraries at different versions that you will not see on your filesystem, but which take up disk space because the kernel VFS layer cannot delete them while they are needed to run those processes. Eventually, when you reboot and all processes are shut down, that space can be reclaimed and only one copy of the libraries will remain on disk; until then it won't happen.

How do I configure GNU libc so that the essential libraries like libc.so go into /lib and the other into /usr/lib?

Like all other GNU packages GNU libc is designed to use a base directory and install all files relative to this. The default is /usr/local, because this is safe (it will not damage the system if installed there). If you wish to install GNU libc as the primary C library on your system, set the base directory to /usr (i.e. run configure --prefix=/usr <other_options>).

Some systems like Linux have a filesystem standard which makes a difference between essential libraries and others. Essential libraries are placed in /lib because this directory is required to be located on the same disk partition as /. The /usr subtree might be found on another partition/disk. If you configure for Linux with --prefix=/usr, then this will be done automatically.

To install the essential libraries which come with GNU libc in /lib on systems other than Linux, one must explicitly request it. Autoconf has no option for this, so you have to use a configparms file (see the INSTALL file for details). It should contain:
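A configparms along those lines might look like this (a sketch; slibdir and sysconfdir are the Makefile variables glibc's build system reads for these two directories):

```
slibdir=/lib
sysconfdir=/etc
```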

The first line specifies the directory for the essential libraries, the second line the directory for system configuration files.

Do I need to use GNU CC to compile programs that will use the GNU C Library?

In theory, no; the linker does not care, and the headers are supposed to check for GNU CC before using its extensions to the C language.

However, there are currently no ports of glibc to systems where another compiler is the default, so no one has tested the headers extensively against another compiler. You may therefore encounter difficulties. If you do, please report them as bugs.

Also, in several places GNU extensions provide large benefits in code quality. For example, the library has hand-optimized, inline assembly versions of some string functions. These can only be used with GCC.

Looking through the shared libc file I haven't found the functions `stat', `lstat', `fstat', and `mknod' and while linking on my Linux system I get error messages. How is this supposed to work?

Believe it or not, stat and lstat (and fstat, and mknod) are supposed to be undefined references in libc.so.6! Your problem is probably a missing or incorrect /usr/lib/libc.so file; note that this is a small text file now, not a symlink to libc.so.6. It should look something like this:
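On an x86-64 Debian-style system the linker script might look like the sketch below; the exact paths and output format vary by architecture and distribution, so treat these as illustrative guesses, not the file to copy verbatim:

```
/* GNU ld script */
OUTPUT_FORMAT(elf64-x86-64)
GROUP ( /lib/x86_64-linux-gnu/libc.so.6
        /usr/lib/x86_64-linux-gnu/libc_nonshared.a
        AS_NEEDED ( /lib64/ld-linux-x86-64.so.2 ) )
```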

Programs using libc have their messages translated, but other behavior is not localized (e.g. collating order); why?

Translated messages are automatically installed, but the locale database that controls other behaviors is not. You need to run localedef to install this database, after you have run `make install'. For example, to set up the French Canadian locale, simply issue the command
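Assuming the locale source files from localedata are installed (e.g. under /usr/share/i18n), that command is:

```sh
localedef -i fr_CA -f ISO-8859-1 fr_CA
```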

Please see localedata/README in the source tree for further details.

How do I create the databases for NSS?

If you have an entry 'db' in /etc/nsswitch.conf you should also create the database files. The glibc sources contain a Makefile which performs the necessary conversions and calls to create those files. The file is db-Makefile in the subdirectory nss, and you can invoke it with `make -f db-Makefile'. Please note that not all services are capable of using a database.

Even statically linked programs need some shared libraries which is not acceptable for me. What can I do?

In glibc 2.27, released in 2018, support for statically linked programs that call dlopen was deprecated. The intent was to simplify support for statically linked programs and avoid the problematic use cases that users created by mixing the two linkage models. Users should either link statically or dynamically, picking the model that best meets their needs while avoiding a hybrid mix of the two.

Internally glibc continues to use dlopen for several major subsystems including NSS, gconv, IDN, and thread cancellation. For example NSS (for details just type info libc 'Name Service Switch') won't work properly without shared libraries. NSS allows using different services (e.g. NIS, files, db, hesiod) by just changing one configuration file (/etc/nsswitch.conf) without relinking any programs. The disadvantage is that static programs or libraries now need to access shared libraries to load the NSS plugins to resolve identity management (IdM) queries. A solution to this problem for statically linked applications has been proposed but not implemented; it involves the potential use of /usr/bin/getent and an IPC mechanism to allow statically linked applications to call out to getent to implement the IdM APIs.

Lastly, you could configure glibc with --enable-static-nss, but this is not recommended. In this case you can create a static binary that will use only the services dns and files (change /etc/nsswitch.conf for this). You need to link explicitly against all these services. For example:
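Such a link line might look like this (the source file name is a placeholder, and this only works against a glibc built with --enable-static-nss):

```sh
gcc -static test-netdb.c -o test-netdb \
    -Wl,--start-group -lc -lnss_files -lnss_dns -lresolv -Wl,--end-group
```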

The problem with this approach is that you've got to link every static program that uses NSS routines with all those libraries. In fact, one cannot say anymore that a glibc compiled with this option is using NSS. There is no switch anymore. Thus using --enable-static-nss makes the behaviour of the programs on the system inconsistent.

I need lots of open files. What do I have to do?

This is primarily a kernel issue. The kernel defines OPEN_MAX, the number of simultaneously open files, and FD_SETSIZE, the number of usable file descriptors. You need to change these values in your kernel and recompile it so that it allows more open files. You don't necessarily need to recompile the GNU C library, since the only place in the library itself where OPEN_MAX and FD_SETSIZE really matter is the size of fd_set, which is used by select.

The GNU C library is now select-free. This means it internally has no limits imposed by the fd_set type; in all places where the functionality is needed, the poll function is used instead.

If you increase the number of file descriptors in the kernel you don't need to recompile the C library.

You can always get the maximum number of file descriptors a process is allowed to have open at any time using

This will work even if the kernel limits change.

Why shall glibc never get installed on GNU/Linux systems in /usr/local?

The GNU C compiler treats /usr/local/include and /usr/local/lib in a special way: these directories are searched before the system directories. Since on GNU/Linux the system directories /usr/include and /usr/lib contain a (possibly different) version of glibc, and mixing certain files from different glibc installations is not supported and will break, you risk breaking your complete system. If you want to test a glibc installation, use another directory as the argument to --prefix. If you want to install this glibc version as the default, overriding the existing one, use --prefix=/usr and everything will go in the right places.

Source and binary incompatibilities

The prototypes for `connect', `accept', `getsockopt', `setsockopt', `getsockname', `getpeername', `send', `sendto', and `recvfrom' are different in GNU libc from any other system I saw. This is a bug, isn't it?

No, this is no bug. GNU libc already follows the Single Unix specifications (and I think the POSIX.1g draft which adopted the solution). The type for a parameter describing a size is socklen_t.

Why don't signals interrupt system calls anymore?

By default GNU libc uses the BSD semantics for signal(), unlike Linux libc 5 which used System V semantics. This is partially for compatibility with other systems and partially because the BSD semantics tend to make programming with signals easier.

There are three differences:

  • BSD-style signals that occur in the middle of a system call do not affect the system call; System V signals cause the system call to fail and set errno to EINTR.
  • BSD signal handlers remain installed once triggered; System V signal handlers work only once, so one must reinstall them each time.
  • A BSD signal is blocked during the execution of its handler. In other words, a handler for SIGCHLD (for example) does not need to worry about being interrupted by another SIGCHLD. It may, however, be interrupted by other signals.

There is general consensus that for `casual' programming with signals, the BSD semantics are preferable. You don't need to worry about system calls returning EINTR, and you don't need to worry about the race conditions associated with one-shot signal handlers.

If you are porting an old program that relies on the old semantics, you can quickly fix the problem by changing signal() to sysv_signal() throughout. Alternatively, define _XOPEN_SOURCE before including <signal.h>.

For new programs, the sigaction() function allows you to specify precisely how you want your signals to behave. All three differences listed above are individually switchable on a per-signal basis with this function.

If all you want is for one specific signal to cause system calls to fail and return EINTR (for example, to implement a timeout) you can do this with siginterrupt().

I've got errors compiling code that uses certain string functions. Why?

glibc has special string functions that are faster than the normal library functions. Some of these functions are additionally implemented as inline functions and others as macros. This might lead to problems with existing code, but it is explicitly allowed by ISO C.

The optimized string functions are only used when compiling with optimizations (-O1 or higher). The behavior can be changed with two feature macros:

  • __NO_STRING_INLINES: Don't do any string optimizations.

  • __USE_STRING_INLINES: Use assembly language inline functions (might increase code size dramatically).

Since some of these string functions are now additionally defined as macros, code like 'char *strncpy();' doesn't work anymore (and is unnecessary, since <string.h> has the necessary declarations). Either change your code or define __NO_STRING_INLINES.

Another problem in this area is that gcc still has problems on machines with very few registers (e.g., ix86). The inline assembler code can require almost all the registers and the register allocator cannot always handle this situation.

One can disable the string optimizations selectively. Instead of writing

  • strcpy(a, b)

one can write

  • (strcpy)(a, b)

Enclosing the function name in parentheses suppresses macro expansion, so the call resolves to the ordinary library function; this disables the optimization for that specific call.

I get compiler messages 'Initializer element not constant' with stdin/stdout/stderr. Why?

Constructs like

  • static FILE *InPtr = stdin;

lead to this message. This is correct behaviour with glibc, since stdin is not a constant expression. Please note that a strict reading of ISO C does not allow the above construct.

One of the advantages of this is that you can assign to stdin, stdout, and stderr just like to any other global variable (e.g. stdout = my_stream;), which can be very useful with custom streams that you can write with libio (but beware this is not necessarily portable). The reason for implementing it this way was versioning problems with the size of the FILE structure.

To fix those programs you've got to initialize the variable at run time. This can be done, e.g. in main, like:

or by constructors (beware this is gcc specific):

I get some errors with `gcc -ansi'. Isn't glibc ANSI compatible?

The GNU C library is compatible with the ANSI/ISO C standard. If you're using `gcc -ansi', the glibc headers specified in the standard follow the standard. The ANSI/ISO C standard defines what has to be in the include files, and also states that nothing else should be in them (by the way, you can still enable additional standards with feature test macros).

The GNU C library is conforming to ANSI/ISO C - if and only if you're only using the headers and library functions defined in the standard.

I can't access some functions anymore. nm shows that they do exist but linking fails nevertheless.

With the introduction of versioning in glibc 2.1, it is possible to export only those identifiers (functions, variables) that are really needed by application programs and by other parts of glibc. This way a lot of internal interfaces are now hidden. nm will still show those identifiers, but marks them as internal. ISO C states that identifiers beginning with an underscore are internal to the libc. An application program normally shouldn't use those internal interfaces (there are exceptions, e.g. __ivaliduser). If a program uses these interfaces, it's broken. These internal interfaces might change between glibc releases or be dropped completely.

The sys/sem.h file lacks the definition of `union semun'.

Nope. This union has to be provided by the user program. Former glibc versions defined it, but that was an error, since it does not make much sense when you think about it. The standards describing the System V IPC functions define it this way, and programs must therefore be adapted.

My program segfaults when I call fclose() on the FILE* returned from setmntent(). Is this a glibc bug?

No. Don't do this. Use endmntent(), that's what it's for.

In general, you should use the correct deallocation routine. For instance, if you open a file using fopen(), you should deallocate the FILE * using fclose(), not free(), even though the FILE * is also a pointer.

In the case of setmntent(), it may appear to work in most cases, but it won't always work. Unfortunately, for compatibility reasons, we can't change the return type of setmntent() to something other than FILE *.

I get 'undefined reference to `atexit'.

This means that your installation is somehow broken. The situation is the same as for stat(), fstat(), etc (see question 2.7). Investigate why the linker does not pick up libc_nonshared.a.

If a similar message is issued at runtime this means that the application or DSO is not linked against libc. This can cause problems since atexit() is not exported anymore.

Runtime

Why does copying via 'cp' of an in-use shared object sometimes result in a crash of my program?

The GNU/Linux version of cp(1) (coreutils) modifies files in place, and shared objects are mapped by the dynamic loader from disk a piece at a time, as they are referenced. Thus, if you copy over the file, the in-memory image may end up with some parts of the old version and some parts of the new version, which is why your program crashes.

While there are ways to 'snapshot' a file on disk (MAP_COPY) to prevent this, doing so is expensive, and it is not supported on Linux (see Linus Torvalds' comments on this and on the use case for MAP_COPY).

You can update a shared object on disk atomically and safely by using the rename(3) function, or mv(1) and rename(1), instead of cp(1); this allows programs that are still using the old version to continue using it. This, of course, requires more disk space, and some programs may continue using the old version longer than you'd expect.

Why doesn't my application automatically notice changes to /etc/resolv.conf?

This issue was fixed in glibc 2.26 (bug 984) released on 2017-08-02, but the FAQ entry remains to answer the question for older versions of the library.

The long-standing tradition in Unix has been that applications load /etc/resolv.conf on demand, and once loaded will not change the contents of the internal resolver structure until res_init() or res_ninit() have been called again. This means that an application is responsible for polling /etc/resolv.conf for changes in the structure of the network. This choice was made a long time ago when networks were relatively static structures. Modern systems, particularly mobile devices, move from network to network, and need a more robust and automatic handling of resolver status. If you have a glibc older than 2.26 then your best workarounds are:

  • Use a local resolver that can be updated dynamically e.g. nscd, local validating resolver etc.
  • Add a polling loop to look for /etc/resolv.conf changes and take appropriate actions.

Developing Applications

What is the authoritative source for public glibc APIs?

The GNU C Library manual is the authoritative source for information related to the implementation of functions in glibc.

The Linux Man Pages are non-authoritative, but they are incredibly useful, easy to use, and often the first source of such information.

The Linux man pages are generally authoritative on kernel syscalls, and we have worked hard in cases like futex to ensure the documentation is clear enough for all C libraries.

We should all work together to keep both the manual (glibc manual) and the shorter form API references (linux man pages) up to date with the most accurate information we have.

If you find issues with the manual or the Linux man pages, please reach out to discuss them.

What other sources of documentation about glibc are available?

The glibc manual is part of glibc; it is also available online.

The Linux man-pages project documents the Linux kernel and the C library interfaces.

The Open Group maintains the POSIX Standard which is the authoritative reference for the POSIX APIs.

The ISO JTC1 SC22 WG14 maintains the C Standard which is the authoritative reference for the ISO C APIs.

The official home page of glibc is at http://www.gnu.org/software/libc.

The glibc wiki is at http://sourceware.org/glibc/wiki/HomePage.

For bugs, the glibc project uses the sourceware bugzilla with component 'glibc'.

Miscellaneous

How can I set the timezone correctly?

You first have to install yourself the timezone database, it is hosted at http://www.iana.org/time-zones.

Then simply run the tzselect shell script, answer the questions, and use the name it prints at the end by making a symlink /etc/localtime pointing to /usr/share/zoneinfo/NAME (NAME is the value returned by tzselect). That's all; you never have to worry about it again. Instead of the system-wide setting in /etc/localtime, you can also set the TZ environment variable.

The GNU C Library supports the extended POSIX method for setting the TZ variable; this is documented in the manual.

How can I find out which version of glibc I am using at the moment?

If your system includes a package manager, asking it what's installed is typically best.

If you want to find out the version from the command line, simply run the libc binary. This is probably not possible on all platforms, but where it is, simply locate the libc shared library and start it as an application. On Linux it would look like this:

This will produce all the information you need. On 64-bit systems, you may need to use /lib64/libc.so.6 instead.

Most Linux systems install the libc binary as a symbolic link, which also gives some hints:

What always will work is to use the API glibc provides. Compile and run the following little program to get the version information:

This interface can, of course, also be used to perform tests at runtime if necessary.

Context switching with setcontext() does not work from within signal handlers.

XXX: Is the following still correct?

The Linux implementations of setcontext() (IA-64 and S390, so far) support synchronous context switches only. There are several reasons for this:

  • UNIX provides no other (portable) way of effecting a synchronous context switch (also known as co-routine switch). Some versions support this via setjmp()/longjmp() but this does not work universally.

  • As defined by the UNIX '98 standard, the only way setcontext() could trigger an asynchronous context switch is if this function were invoked on the ucontext_t pointer passed as the third argument to a signal handler. But according to draft 5, XPG6, XBD 2.4.3, setcontext() is not among the set of routines that may be called from a signal handler.

  • If setcontext() were to be used for asynchronous context switches, all kinds of synchronization and re-entrancy issues could arise and these problems have already been solved by real multi-threading libraries (e.g., POSIX threads).

  • Synchronous context switching can be implemented entirely in user-level and less state needs to be saved/restored than for an asynchronous context switch. It is therefore useful to distinguish between the two types of context switches. Indeed, some application vendors are known to use setcontext() to implement co-routines on top of normal (heavier-weight) pre-emptable threads.

It should be noted that if someone was dead set on using setcontext() on the third arg of a signal handler, then IA-64 Linux could support this via a special version of sigaction() which arranges that all signal handlers start executing in a shim function which takes care of saving the preserved registers before calling the real signal handler and restoring them afterwards. In other words, we could provide a compatibility layer which would support setcontext() for asynchronous context switches. However, given the arguments above, I don't think that makes sense. setcontext() provides a decent co-routine interface and we should just discourage any asynchronous use (which just calls for trouble at any rate).

What are the accuracy goals for libm functions?

See a libc-alpha message discussing these goals in detail. Except for functions such as sqrt, fma and rint that are specified to be bound to particular IEEE 754 operations, and whose results (including exceptions raised) are fully defined to be correctly rounded for all rounding modes, libm functions are not intended to be correctly rounded, are not intended to have errors below 1 ulp (they may have errors of up to a few ulp on some inputs), and are not guaranteed to be monotonic on regions where the underlying mathematical function is monotonic. A draft set of C bindings to IEEE 754-2008 is under development, part of which (TS 18661-4) is expected to define standard names such as crsin for correctly rounded functions, and in future glibc may provide some such functions under such names.

Why are libm functions slow on some inputs?

The GNU C Library includes a math library that contains a considerable amount of code donated by IBM. The IBM code uses specialized algorithms to compute approximate results for a given input to a specific mathematical function.

In some cases, higher precision is required during the computation of intermediate results in order to produce an accurate final result. There is in fact a lot of academic research attempting to prove the maximum precision required from intermediate results to produce an output of a given precision (these proofs are generally per function). If an intermediate result requires higher precision than is available in hardware, the function simulates the required precision using what is called integer multi-precision: if you need 100 bits, you gang together enough integers to simulate 100 bits and operate on those larger numbers using the same specialized algorithms. Eventually the 100-bit result is rounded down to the size of float, double, or long double, depending on the function called. Thus the input of the function may require higher-precision intermediate calculations, which may in turn use slower integer multi-precision values to compute an accurate result. Without the higher intermediate precision, the accuracy of the functions would be terrible.

You can detect whether you are calling the slow path by using the libm systemtap probe points for the slow paths in several libm functions. The expectation is that you can then use the probe trigger information to tweak your code to avoid the slow paths. The community is also looking at providing an alternate implementation of libm that is faster, perhaps selected by -ffast-math, which skips the slow paths at the expense of accuracy and provides faster results.

Why no strlcpy / strlcat?

The strlcpy and strlcat functions have been promoted as a way of copying strings more safely, to avoid buffer overruns when retrofitting large bodies of existing code without understanding the code in detail. Annex K of the C11 standard defines optional functions strcpy_s and strcat_s that serve a similar need, albeit less efficiently and with different calling conventions. Unfortunately, in practice these functions can cause trouble, as their intended use encourages silent data truncation, adds complexity and inefficiency, and does not prevent all buffer overruns in the destinations. New standard library functions should reflect good existing practice, and since it is not clear that these functions are good practice they have been omitted from glibc.

Compiling with gcc -D_FORTIFY_SOURCE can catch many of the errors that these functions are supposed to catch, without having to modify the source code. Also, if efficiency is not paramount the snprintf function can often be used as a portable substitute for these functions.

How do I build a binary that works on older GNU/Linux distributions?

(with the answer pointing to LSB, with information about distro LSB packages).

How do I build glibc on Ubuntu (list other distros here with similar problems)?

Some distribution compilers enable -fstack-protector by default. The GNU C Library cannot be compiled with it, so you need to add '-fno-stack-protector -U_FORTIFY_SOURCE' to CFLAGS.

After installing glibc 2.15, I cannot compile GCC anymore

Advice: it may be useful to have a similar new question regarding the siginfo_t changes and libgcc build failures; existing GCC releases (predating Thomas's patches) won't build with current glibc because of that.