Packaging llmster with Nix
As part of LM Studio 0.4, a new headless server called llmster has been released, intended to be run on servers without a GUI.
Just like many tools distributed as binary releases, llmster is intended to be installed through a bash script:
$ curl -fsSL https://lmstudio.ai/install.sh | bash
People using NixOS or the Nix package manager cannot easily use such installation methods: precompiled binaries expect their dynamic linker and libraries at standard filesystem paths, which do not exist on NixOS.
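For example, running an unpatched x86_64 binary on NixOS typically fails before any of its code executes, because the ELF interpreter path embedded in the binary does not exist there. The transcript below is illustrative rather than captured from llmster itself:

$ ./llmster
bash: ./llmster: No such file or directory
$ patchelf --print-interpreter ./llmster
/lib64/ld-linux-x86-64.so.2

The file clearly exists; the confusing "No such file or directory" refers to the missing interpreter /lib64/ld-linux-x86-64.so.2.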
Besides this installation script, little information is provided about the llmster binary; only some instructions on how to create a systemd service are available.
Inspecting the installation script, we observe that it downloads a tarball, along with SHA-512 checksums, from an S3 bucket; the tarball has the following content:
$ tree -a -F -L 2 .
├── .bundle/
│   ├── bin/
│   ├── bundled-plugins/
│   ├── daemon/
│   ├── daemon-mac-updater.sh*
│   ├── lib/
│   └── lms*
└── llmster*
The llmster binary is compiled with Bun, a JavaScript runtime similar to Node.js, and it depends on the files in .bundle being available in the same directory.
We therefore install everything into $out/libexec and create a wrapper in $out/bin to make the binary available system-wide.
Precompiled binaries like this need to be patched with patchelf on Linux so that their dynamic library dependencies resolve correctly.
Nix's fixup phase usually also strips such binaries (removing symbol tables and debug information), but Bun-compiled executables become non-functional when stripped, so we need to set dontStrip here.
dontStrip = true;

nativeBuildInputs = [
  makeBinaryWrapper
]
++ lib.optionals stdenv.hostPlatform.isLinux [
  autoPatchelfHook
];

installPhase = ''
  runHook preInstall

  mkdir -p $out/libexec
  mv llmster .bundle $out/libexec/
  makeWrapper $out/libexec/llmster $out/bin/llmster

  runHook postInstall
'';
Another challenge is the GPU support provided by llmster.
Nixpkgs provides an autoAddDriverRunpath hook that injects the path to the GPU drivers available at run time, but this hook also strips binaries under the hood, so we need to do the patching manually.
We search not only for .so files but also for .node files, as these native addons contain compiled code as well.
nativeBuildInputs = lib.optionals stdenv.hostPlatform.isLinux [
  addDriverRunpath
];

postFixup = lib.optionalString stdenv.hostPlatform.isLinux ''
  find $out/libexec/.bundle -type f \( -name '*.so' -o -name '*.so.*' -o -name '*.node' \) | while read -r lib; do
    addDriverRunpath "$lib"
  done
'';
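To confirm that the hook took effect, the runpath of a patched file can be inspected with patchelf; the library name below is a hypothetical example:

$ patchelf --print-rpath result/libexec/.bundle/lib/libexample.so
/run/opengl-driver/lib:...

On NixOS, /run/opengl-driver/lib is the well-known location where the configured GPU drivers are linked at run time, which is exactly what addDriverRunpath prepends to the runpath.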
Finally, we need to ignore missing references to the actual GPU driver, which is not known at build time.
autoPatchelfIgnoreMissingDeps = [
  "libcuda.so.1"
];
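Putting these pieces together, a minimal derivation could look like the following sketch. The version, URL, and hash are placeholders rather than the real values from the S3 bucket, and the actual package in the pull request is more complete:

# Condensed sketch; URL, version, and hash are hypothetical placeholders.
{
  lib,
  stdenv,
  fetchurl,
  makeBinaryWrapper,
  autoPatchelfHook,
  addDriverRunpath,
}:

stdenv.mkDerivation (finalAttrs: {
  pname = "llmster";
  version = "0.0.0"; # placeholder

  src = fetchurl {
    url = "https://example.com/llmster-${finalAttrs.version}.tar.gz"; # placeholder
    hash = lib.fakeHash; # replace with the published SHA-512 checksum
  };

  nativeBuildInputs = [
    makeBinaryWrapper
  ]
  ++ lib.optionals stdenv.hostPlatform.isLinux [
    autoPatchelfHook
    addDriverRunpath
  ];

  # Bun-compiled binaries become non-functional when stripped.
  dontStrip = true;

  # The GPU driver is only known at run time.
  autoPatchelfIgnoreMissingDeps = [ "libcuda.so.1" ];

  installPhase = ''
    runHook preInstall

    mkdir -p $out/libexec
    mv llmster .bundle $out/libexec/
    makeWrapper $out/libexec/llmster $out/bin/llmster

    runHook postInstall
  '';

  postFixup = lib.optionalString stdenv.hostPlatform.isLinux ''
    find $out/libexec/.bundle -type f \( -name '*.so' -o -name '*.so.*' -o -name '*.node' \) | while read -r lib; do
      addDriverRunpath "$lib"
    done
  '';
})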
The complete package can be found in the corresponding pull request.
I have tested it on a CUDA-capable machine and was able to run llmster successfully with GPU support, but this is still a work in progress.
In its current state, running the llmster binary creates the directory .lmstudio as well as the file .lmstudio-home-pointer in the user's home directory.
$ tree -a -F -L 1 .lmstudio
.lmstudio/
├── bin/
├── config-presets/
├── conversations/
├── credentials/
├── dev-logs/
├── extensions/
├── hub/
├── .internal/
├── mcp.json
├── models/
├── projects/
├── server-logs/
├── settings.json
├── user-files/
└── working-directories/
While in theory it is possible to create these directly inside the Nix store, the store's immutability makes that impractical: users would, for instance, be unable to download models. As such, we might need to accept this stateful behavior for now; on NixOS, a systemd service can at least confine it to a dedicated state directory, as sketched below.
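The following module is only a sketch and not part of the pull request: the pkgs.llmster attribute and the use of HOME to redirect the state are assumptions based on the behavior described above, and llmster's command-line interface is not documented.

# Hypothetical NixOS module confining llmster's state to /var/lib/llmster.
{ config, pkgs, ... }:

{
  systemd.services.llmster = {
    wantedBy = [ "multi-user.target" ];

    # llmster creates .lmstudio and .lmstudio-home-pointer in $HOME,
    # so we point HOME at a writable state directory.
    environment.HOME = "/var/lib/llmster";

    serviceConfig = {
      DynamicUser = true;
      StateDirectory = "llmster"; # creates /var/lib/llmster
      ExecStart = "${pkgs.llmster}/bin/llmster"; # assumes the package from the PR
    };
  };
}

Please help test the PR and provide feedback for improvements!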