30.11.2021

Adventures in developing a cross-platform kernel

Now our geometric kernel can be used for software development on a wide range of operating systems: Windows, MacOS and iOS, FreeBSD, and several Linux distributions (Ubuntu, Debian, Scientific Linux). The kernel’s SDK (software development kit) also supports a wide variety of compilers: MSVC 2012-2019, GCC 4.8-7.2, and Clang 6.0-10.0.

This wasn’t always the case. In 2012, when the C3D kernel was first extracted from the KOMPAS-3D CAD system as a separate product, it worked only with the MSVC (Microsoft Visual C++) compiler and, of course, ran only on Windows. But as we developed the kernel over time, requirements and wish lists began to arrive from customers that we could not ignore if we wanted a truly best-in-class product. Below is the story of how we ported the kernel to the many OSes and platforms we support today.


Work environment

When we initially began building and debugging the library on other operating systems, we honestly didn’t use cross-platform compilation, emulation, or other tricks: we took the target platform, set up the working environment, and installed the build tools and dependencies. Our prime concern at the time was verifying that the build worked correctly in the environment for which it was created.

The first obstacles arose at this early stage: the required compiler version might be missing from the operating system’s distribution, we might have used a version of CMake lower than required or determined the paths to dependencies incorrectly, and so on. Fortunately, none of these errors hid serious pitfalls, but they taught us that we would have to tinker.

The next step was generating a project with CMake. This is where the next batch of problems began, as some environments require specific CMake settings. In particular, this situation arose on MacOS, where we had to create a whole block of precise RPATH settings. As it turned out, the linker on MacOS differs from the standard behavior in the order of the paths it searches for dependent libraries. Various compilation flags are also configured through CMake, but the need for particular flags is usually determined by the later steps.

The next step was to actually build the library. The problems that we encountered at this stage came from two sources: the operating system, and the compiler.

Differences by operating system

Paths to the standard library headers depend mainly on the operating system being used and on the implementation of the standard C and C++ libraries (libc/libstdc++). For example, when doing low-level debugging of memory allocation in kernel code, the relevant functions are declared in different headers:

  • malloc/malloc.h in one operating system
  • stdlib.h in another
  • malloc.h in a third

Or different functions may need to be called, again depending on the OS. We solved such problems with preprocessor conditionals like this one:

#if defined(C3D_MacOS)
#include <malloc/malloc.h>
#elif defined(C3D_FreeBSD)
#include <stdlib.h>
#else
#include <malloc.h>
#endif

As the geometric kernel is responsible for reading and writing data, we had to take platform peculiarities into account to correctly open a file at a given path. Some systems use the WCHAR representation of paths, others use TCHAR, and the calls for read/write operations live in different header files. A more serious file-access problem is that data (including strings) must be read in exactly the same way it was written to the file, regardless of the operating system and its address size. The size of the wchar_t type, however, depends on the platform, and the layout of the standard string depends on compilation options.

For the convenience of our developers, we created a custom type for working with strings, with methods for converting c3d strings to std::string and back, handling paths, and so on, thereby hiding the preprocessor directives.
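
To give an idea of the approach, here is a minimal sketch; the names below are hypothetical and not the kernel’s actual API. The wrapper boils down to choosing the underlying character type once, behind a single alias:

#include <string>

namespace c3d {
#if defined(C3D_WINDOWS)               // hypothetical platform macro
  typedef std::wstring path_string;    // wide paths on Windows
#else
  typedef std::string  path_string;    // UTF-8 narrow paths elsewhere
#endif
} // namespace c3d

Conversion helpers (to std::string and back) then live next to the alias, so the rest of the code never touches the preprocessor.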

In a similar way, we solved the problem of built-in data types whose sizes vary from platform to platform.
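
A minimal sketch of the same idea, assuming fixed-width aliases (the actual type names in the kernel may differ):

#include <cstdint>

typedef std::int32_t  int32;    // always 32 bits, on every platform
typedef std::int64_t  int64;    // always 64 bits
typedef std::uint64_t uint64;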

Differences by compiler

Compiler dependence manifests itself mainly in how differently compilers support the language standards and how they handle expressions that the standard leaves unspecified. During the first stage of porting the code to Linux, we ran into the problem that code written not quite to the standard worked great with the Microsoft compiler but refused to compile with GCC at all. We had to fix it, draw conclusions from our errors, and pay closer attention to the standards of the programming language.

Since then, we have offered our customers a wide range of compilation tools while maintaining compatibility with old C++ language standards, even though writing all the code to the old standard would also be wrong. So we had to find a compromise. Our developers did a lot of work identifying code specific to the various standards, such as C++17, C++14, C++11, and earlier, and then introduced a mechanism that lets us write universal code.

Here is an example. The constexpr specifier has been supported since the C++11 standard. In modern code, the following notation is valid:

constexpr size_t VAR = 100;

The old standard, however, knows nothing about the constexpr keyword, so there we have to write it like this and nothing else:

const size_t VAR = 100;

Our solution was to define standard-specific preprocessor macros:

#ifdef C3D_STANDARD_CXX_11
  #define c3d_constexpr  constexpr
#else
  #define c3d_constexpr  const
#endif

Now, with the C3D_STANDARD_CXX_11 parameter defined correctly, the code c3d_constexpr size_t VAR = 100; works for any standard we employ, and developers do not need to think about it or clutter the code with preprocessor directives; this is done once. The macro appropriate for the standard is determined automatically by asking the compiler which standard it supports.
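
One possible way to set that parameter automatically (an illustrative sketch rather than the exact check used in the kernel) is to inspect the standard version the compiler reports:

#if defined(_MSVC_LANG)            // recent MSVC reports the standard here
  #if _MSVC_LANG >= 201103L
    #define C3D_STANDARD_CXX_11
  #endif
#elif __cplusplus >= 201103L       // GCC and Clang report it in __cplusplus
  #define C3D_STANDARD_CXX_11
#endif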

Multithreading took a special place in developing our C3D library. We use OpenMP to implement multithreading. However, not all operating systems include OpenMP in their base distribution, so we had to write code that builds both with and without the OpenMP option.
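
For illustration, here is a common pattern rather than the kernel’s actual code: when the compiler builds without OpenMP, the runtime calls get trivial single-threaded fallbacks, while #pragma omp directives are simply ignored.

#ifdef _OPENMP
  #include <omp.h>
#else
  inline int omp_get_thread_num()  { return 0; }  // the only "thread"
  inline int omp_get_num_threads() { return 1; }  // running alone
#endif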

In general, compilation problems arise mainly when porting to a system or platform that is completely new to us. At the moment, the Linux build is set up in such a way that adding another distribution or compiler to the list of supported ones is no longer a problem.

Checking results

Once we completed the build and received the library file, it was time to test the result. Tests reveal non-obvious errors in the source code that cause the application to operate incorrectly. Unfortunately, we have not yet come up with a general method for eliminating such errors, so each of them is solved on an individual basis.

There is another situation, in which the result differs slightly from what we expected. A geometric kernel built on its own complex algorithms and computational methods, as well as on the capabilities of the STL (Standard Template Library) and built-in mathematical functions, is a rather fragile product. It is practically impossible to get absolutely identical results from a library built with different compilers. Therefore, we consider the result satisfactory when it matches a reference value to a given accuracy.
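
In practice this means a regression check compares a computed value against a stored reference with a tolerance instead of expecting bit-exact equality. A minimal sketch, assuming a hypothetical helper:

#include <cmath>

// True when the computed value matches the reference to the given accuracy.
inline bool MatchesReference( double computed, double reference, double accuracy )
{
  return std::fabs( computed - reference ) <= accuracy;
}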

All the steps outlined above are performed manually only once, after which a scheme is established for automatically building and testing the C3D kernel. Build automation for Linux systems is based on Docker; when we need to add support for the next compiler with a specific system environment, we create a container with the base system, install all dependencies into it, configure it, and then launch it. Builds for Windows, MacOS, and FreeBSD run on the target operating systems.

We plan to implement cross-compilation, which should speed up the process by letting us use more powerful machines, but we certainly expect that there will be pitfalls along the way!

Distributing kernels to end users

Over the past few years, the product distribution chain has also changed, largely because of the move to cross-platform support. Where once we had a single machine for building and testing, now we are talking about building the library on several machines in dozens of different configurations.

As our continuous integration tool, we use BuildBot, which launches all the necessary builds on a signal from the source control system and, upon completion, runs various tests, from unit tests to lengthy regression tests. So far, this scheme has justified itself by significantly reducing the time it takes us to deliver a new kit to end users.

In conclusion, we note that we don’t stop once we achieve one result. The list of supported systems is constantly expanding, the quality of the code is improving, and the process of distributing the kernel to the user is accelerating.

