GT.M on Linux supports using huge pages (depending on the CPU architecture, typically 2MiB or 4MiB rather than the default 4KiB) for shared memory segments for BG access and for the heap of mumps processes. Using huge pages reduces the size of page tables and the associated memory-management overhead, at the cost of a slight increase in total system virtual memory usage. While your mileage may vary, FIS believes that many large applications will benefit from configuring and using huge pages. If you do not configure your system for huge pages, GT.M continues to use the default page size. Note that using huge pages requires no change whatsoever at the application code level, or even to database management operations such as replication, backup, integ, reorg, etc.; the changes discussed here pertain entirely to configuring system memory and the virtual memory manager to improve application performance.
To use huge pages:
You must have a 32- or 64-bit x86 CPU running a Linux kernel with huge pages enabled. All currently Supported Linux distributions appear to support huge pages; to confirm, use the command grep hugetlbfs /proc/filesystems, which should report: nodev hugetlbfs
You must have libhugetlbfs.so installed in a standard location for system libraries. Installing the library using your Linux system's package manager should place it in a standard location. (Note that libhugetlbfs is not in Debian repositories and must be manually installed; GT.M on Debian releases is Supportable, not Supported.)
You must have a sufficient number of huge pages available:
To reserve huge pages, boot Linux with the hugepages=num_pages kernel boot parameter; or, shortly after bootup while unfragmented memory is still available, use the command: hugeadm --pool-pages-min DEFAULT:num_pages
For subsequent on-demand allocation of huge pages, use: hugeadm --pool-pages-max DEFAULT:num_pages or set the value of /proc/sys/vm/nr_overcommit_hugepages. These delayed (from boot) actions do not guarantee availability of the requested number of huge pages; however, they are safe because, if a sufficient number of huge pages is not available, Linux simply uses traditional-size pages.
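As an illustration, the following commands (run as root; the pool sizes of 512 and 1024 pages are arbitrary examples, not recommendations) verify kernel support, reserve a minimum pool of huge pages, permit on-demand growth, and check the result:

```shell
# Confirm that the kernel supports huge pages; should report "nodev hugetlbfs"
grep hugetlbfs /proc/filesystems

# Reserve a minimum pool of 512 huge pages (best done soon after boot)
hugeadm --pool-pages-min DEFAULT:512

# Permit the pool to grow on demand to at most 1024 huge pages
hugeadm --pool-pages-max DEFAULT:1024

# Check the current state of the huge page pool
grep Huge /proc/meminfo
```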
To use huge pages for shared memory segments for the BG database access mode, both of the following are required:
Permit GT.M processes to use huge pages for shared memory segments (where available, FIS recommends option 1 below; note, however, that setcap stores capabilities in extended attributes, which not all file systems support). Either:
set the CAP_IPC_LOCK capability for your mumps, mupip and dse processes with a command such as setcap 'cap_ipc_lock+ep' $gtm_dist/mumps, or
permit the group used by GT.M processes to use huge pages with, as root: echo gid >/proc/sys/vm/hugetlb_shm_group.
Set the environment variable HUGETLB_SHM to yes for each process.
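For example, a minimal setup might look like the following sketch (the group id 101 is purely illustrative; use the gid of the group your GT.M processes run as):

```shell
# Option 1 (recommended where available): grant CAP_IPC_LOCK to the GT.M executables, as root
setcap 'cap_ipc_lock+ep' $gtm_dist/mumps
setcap 'cap_ipc_lock+ep' $gtm_dist/mupip
setcap 'cap_ipc_lock+ep' $gtm_dist/dse

# Option 2: permit a group (gid 101 here, illustrative) to use huge pages, as root
echo 101 >/proc/sys/vm/hugetlb_shm_group

# In either case, set HUGETLB_SHM in the environment of each GT.M process
export HUGETLB_SHM=yes
```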
To use huge pages for process working space and dynamically linked code:
Set the environment variable HUGETLB_MORECORE to yes for each process.
Although not required to use huge pages, your application is also likely to benefit from including the path to libhugetlbfs.so in the LD_PRELOAD environment variable.
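For example, a per-process setup might be (assuming libhugetlbfs.so is installed in a standard system library location, so the bare library name suffices for LD_PRELOAD):

```shell
# Use huge pages for the heap (process working space)
export HUGETLB_MORECORE=yes

# Preload libhugetlbfs.so so that memory allocation can be backed by huge pages
export LD_PRELOAD=libhugetlbfs.so
```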
If you enable huge pages for all applications (by setting HUGETLB_MORECORE, HUGETLB_SHM, and LD_PRELOAD as discussed above in /etc/profile and/or /etc/csh.login), you may find it convenient to suppress warning messages from common applications that are not configured to take advantage of huge pages by also setting the environment variable HUGETLB_VERBOSE to zero (0).
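For instance, in /etc/profile (for bash-style shells; use setenv in /etc/csh.login) you might add, alongside the variables above:

```shell
# Suppress libhugetlbfs warnings from applications not configured for huge pages
export HUGETLB_VERBOSE=0
```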
At this time, huge pages cannot be used for MM databases; for the text, data, or bss segments of processes; or for the process stack.
Refer to the documentation of your Linux distribution for details. The hugetlbpage documentation in the Linux kernel source tree (Documentation/vm/hugetlbpage.txt) and the libhugetlbfs documentation are other sources of information.
Important:
A fatal SIGBUS error occurs when a GT.M process using huge pages spawns another process and Linux does not have sufficient available huge pages to provide the memory the spawned process requires. Operations that spawn additional processes include, for example, the JOB command, the ZSYstem command, and OPENing a PIPE device.
Depending on when huge pages become unavailable, the SIGBUS error may go to the initiating process or may only prevent the new process from starting. The solution is to tune Linux to ensure that the number of huge pages required by your application is always available. Because such tuning may require a reboot, an interim workaround is to unset the environment variable HUGETLB_MORECORE for GT.M processes until you are able to reboot or otherwise make available an adequate supply of huge pages.
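For example, until the huge page pool can be enlarged, the environment of GT.M processes could simply include:

```shell
# Interim workaround: let process heaps fall back to default-size pages
unset HUGETLB_MORECORE
```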