Issue when generating a stager #1916

Open
epsilon1000101 opened this issue Mar 27, 2025 · 6 comments

@epsilon1000101

I am using Sliver on a Kali VM; I installed it with apt install. Everything works fine and I managed to get a session. But when I tried a stager, I ran this command:

generate stager --lhost 10.0.0.8 --lport 8080 --protocol http

And it just keeps saying that it's generating for about 15 minutes without giving any output or error.

Has anybody else had this issue?

@4Kp3n

4Kp3n commented Mar 27, 2025

Just verified on a fresh Kali installation (2025.1a, Hyper-V pre-built VM, full apt upgrade).

Installed sliver via the curl one-liner (curl https://sliver.sh/install | sudo bash).

Same issue: generating a stager doesn't finish (generate stager --lhost 172.17.117.184 --lport 8443 --arch amd64 --format c --save /tmp).

@j3r1ch0

j3r1ch0 commented Mar 29, 2025

I'm having the same issue. I even tried building from source and downloading directly from the release page instead of using the install script. It doesn't work any way I try it. The error message is "[!] Error: rpc error: code = Unknown desc = exit status 1 - Please make sure Metasploit framework >= v6.2 is installed and msfvenom/msfconsole are in your PATH"

I verified msfvenom and msfconsole are both in my PATH.

The result of the msfconsole --version command is "Framework Version: 6.4.54-dev"

Would really appreciate some guidance on this. Currently, bypassing this requires generating the shellcode directly with msfvenom, which is workable but also a bit of a pain.

A little additional info: I'm running Kali on bare metal on a ThinkPad T series; the architecture is amd64.
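
For anyone triaging the same error message, a quick sanity check of the prerequisites it names (this just mirrors the error text; nothing here is Sliver-specific):

```shell
#!/bin/sh
# Check the two binaries the Sliver error message asks for. `command -v`
# prints the resolved path when the binary is found; otherwise flag the gap.
command -v msfvenom   || echo "msfvenom not in PATH"
command -v msfconsole || echo "msfconsole not in PATH"
# The error also wants Metasploit >= 6.2; msfconsole is slow to start, so
# only run this once the binaries are confirmed:
# msfconsole --version
```

As the rest of this thread shows, though, both checks can pass and stager generation can still fail, so this only rules out the literal PATH problem.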

@lypd0

lypd0 commented Apr 25, 2025

Having the same issue, unfortunately. Has anybody looked into this yet?

@TheZenTester

I'm having the same issue. It seems to have gone unnoticed for over a month, so I'll try to get in the good graces of the devs by giving a full summary of the issue.
Issue
When running generate stager from a Kali VM, it states "Generating payload" but in fact never completes the payload generation. These commands previously worked; I haven't done any upgrades or version changes of either msfvenom or sliver.

System Info
Kali VM running on ARM64

Linux zenkali 6.12.20-arm64 #1 SMP Kali 6.12.20-1kali1 (2025-03-26) aarch64 GNU/Linux

Sliver Version
Installed via apt install sliver in Kali.

[*] Client 1.5.42 - kali - linux/arm64
    Compiled at 2024-03-01 03:50:00 -0800 PST
    Compiled with go version go1.21.7 linux/arm64


[*] Server v1.5.42 - kali - linux/arm64
    Compiled at 2024-03-01 03:50:00 -0800 PST

Command


 ⠹  Generating stager, please wait ...^C

Sliver Logs

Logs prior to killing process.

INFO[2025-04-29T07:48:24-07:00] [sliver/server/daemon/daemon.go:51] No cli lport, using config file or default value
INFO[2025-04-29T07:48:24-07:00] [sliver/server/daemon/daemon.go:55] Starting Sliver daemon :31337 ...
INFO[2025-04-29T07:48:24-07:00] [sliver/server/transport/mtls.go:52] Starting gRPC  listener on :31337
INFO[2025-04-29T07:48:24-07:00] [sliver/server/certs/certs.go:98] Getting certificate ca type = operator, cn = 'server.multiplayer'
INFO[2025-04-29T07:48:24-07:00] [sliver/server/certs/certs.go:98] Getting certificate ca type = operator, cn = 'server.multiplayer'
INFO[2025-04-29T07:48:24-07:00] [google.golang.org/grpc@v1.55.0/internal/grpclog/grpclog.go:37] [core] [Server #1] Server created
INFO[2025-04-29T07:48:24-07:00] [google.golang.org/grpc@v1.55.0/internal/grpclog/grpclog.go:37] [core] [Server #1 ListenSocket #2] ListenSocket created
INFO[2025-04-29T07:48:38-07:00] [github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/logrus/options.go:220] finished unary call with code OK
INFO[2025-04-29T07:48:56-07:00] [github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/logrus/options.go:220] finished unary call with code OK
INFO[2025-04-29T07:49:03-07:00] [sliver/server/msf/msf.go:201] msfvenom [--platform windows --arch x64 --format ps1 --payload windows/x64/meterpreter/reverse_tcp LHOST=192.168.1.5 LPORT=8444 EXITFUNC=thread]

Logs after killing process:

INFO[2025-04-29T08:01:02-07:00] [google.golang.org/grpc@v1.55.0/internal/grpclog/grpclog.go:37] [transport] [server-transport 0x40001024e0] Closing: EOF
INFO[2025-04-29T08:01:02-07:00] [sliver/server/rpc/rpc-events.go:23] 1 client disconnected
INFO[2025-04-29T08:01:02-07:00] [google.golang.org/grpc@v1.55.0/internal/grpclog/grpclog.go:37] [transport] [server-transport 0x40001024e0] loopyWriter exiting with error: transport closed by client
INFO[2025-04-29T08:01:02-07:00] [github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/logrus/options.go:220] finished streaming call with code OK
WARN[2025-04-29T08:01:02-07:00] [sliver/server/rpc/rpc-tunnel.go:126] Error on stream recv rpc error: code = Canceled desc = context canceled
INFO[2025-04-29T08:01:02-07:00] [github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/logrus/options.go:220] finished streaming call with code Canceled

msfvenom

which msfvenom
/usr/bin/msfvenom

Also tried taking the command from the logs and running it verbatim (sans brackets); output truncated for brevity:

msfvenom --platform windows --arch x64 --format ps1 --payload windows/x64/meterpreter/reverse_tcp LHOST=192.168.1.5 LPORT=8444 EXITFUNC=thread
No encoder specified, outputting raw payload
Payload size: 511 bytes
Final size of ps1 file: 2507 bytes
[Byte[]] $buf =

@hellocharli

I'm having the same issue. It looks like it's been a problem since #1289, and there have been several reports of it in the past. Something I didn't see mentioned in the other reports: if you stop the systemd service and start sliver-server manually as root (even in daemon mode), you can generate stagers just fine. It seems to only be a problem when running on Kali AND as a systemd service.
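
For anyone who wants to try that as a stopgap, the workaround sketch looks roughly like this (the unit name sliver is an assumption on my part; check systemctl list-units for what your install actually registers):

```shell
#!/bin/sh
# Workaround sketch, not an official procedure: stop the packaged systemd
# unit and run the server by hand as root instead. The unit name "sliver"
# is an assumption; verify it locally before running.
if command -v sliver-server >/dev/null 2>&1; then
    sudo systemctl stop sliver
    sudo sliver-server daemon
else
    echo "sliver-server not found in PATH"
fi
```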

I'll throw my logs onto the pile:

uname

charlie@kali ~> uname -a
Linux kali 6.12.25-amd64 #1 SMP PREEMPT_DYNAMIC Kali 6.12.25-1kali1 (2025-04-30) x86_64 GNU/Linux

Standard Kali install in a Proxmox VM. Sliver installed earlier today with curl https://sliver.sh/install | sudo bash

Version + client output

charlie@kali ~> sliver
Connecting to localhost:31337 ...
[*] Loaded 3 aliases from disk
[*] Loaded 1 extension(s) from disk

.------..------..------..------..------..------.
|S.--. ||L.--. ||I.--. ||V.--. ||E.--. ||R.--. |
| :/\: || :/\: || (\/) || :(): || (\/) || :(): |
| :\/: || (__) || :\/: || ()() || :\/: || ()() |
| '--'S|| '--'L|| '--'I|| '--'V|| '--'E|| '--'R|
`------'`------'`------'`------'`------'`------'

All hackers gain infect
[*] Server v1.5.43 - e116a5ec3d26e8582348a29cfd251f915ce4a405
[*] Welcome to the sliver shell, please type 'help' for options

sliver > generate stager -f ps1 -L 10.20.20.8

[!] Error: rpc error: code = Unknown desc = exit status 1 - Please make sure Metasploit framework >= v6.2 is installed and msfvenom/msfconsole are in your PATH

Log output

INFO[2025-06-03T22:03:20-07:00] [sliver/server/msf/msf.go:201] msfvenom [--platform windows --arch x64 --format python --payload windows/x64/meterpreter/reverse_tcp LHOST=10.20.20.8 LPORT=8443 EXITFUNC=thread]
INFO[2025-06-03T22:04:13-07:00] [sliver/server/msf/msf.go:208] /usr/bin/msfvenom --platform windows --arch x64 --format python --payload windows/x64/meterpreter/reverse_tcp LHOST=10.20.20.8 LPORT=8443 EXITFUNC=thread
INFO[2025-06-03T22:04:13-07:00] [sliver/server/msf/msf.go:210] --- stdout ---

INFO[2025-06-03T22:04:13-07:00] [sliver/server/msf/msf.go:211] --- stderr ---
<internal:dir>:184:in `open': Too many levels of symbolic links @ dir_initialize - /usr/lib/llvm-19/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Release/build/Debug+Asserts/build/Debug+Asserts/build/Release/build/Release/build/Release/build/Release/build/Release/build/Debug+Asserts/build/Release/lib/clang/19/include/cuda_wrappers/bits (Errno::ELOOP)
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:50:in `foreach'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:50:in `walk'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:60:in `block in walk'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:50:in `foreach'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:50:in `walk'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:60:in `block in walk'
        <-- this repeated a lot -->
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:50:in `foreach'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:50:in `walk'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path_scanner.rb:38:in `call'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path.rb:93:in `scan!'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/path.rb:77:in `entries_and_dirs'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/cache.rb:172:in `block (2 levels) in unshift_paths_locked'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/cache.rb:165:in `reverse_each'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/cache.rb:165:in `block in unshift_paths_locked'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/store.rb:53:in `block in transaction'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/store.rb:52:in `synchronize'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/store.rb:52:in `transaction'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/cache.rb:164:in `unshift_paths_locked'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/cache.rb:113:in `block in unshift_paths'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/cache.rb:113:in `synchronize'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/cache.rb:113:in `unshift_paths'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/change_observer.rb:22:in `unshift'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/railties-7.1.5.1/lib/rails/application.rb:396:in `add_lib_to_load_path!'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/railties-7.1.5.1/lib/rails/application.rb:76:in `inherited'
        from /usr/share/metasploit-framework/config/application.rb:37:in `<module:Framework>'
        from /usr/share/metasploit-framework/config/application.rb:36:in `<module:Metasploit>'
        from /usr/share/metasploit-framework/config/application.rb:35:in `<top (required)>'
        from /usr/lib/ruby/3.3.0/bundled_gems.rb:69:in `require'
        from /usr/lib/ruby/3.3.0/bundled_gems.rb:69:in `block (2 levels) in replace_require'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:30:in `require'
        from /usr/share/metasploit-framework/config/environment.rb:2:in `<top (required)>'
        from /usr/lib/ruby/3.3.0/bundled_gems.rb:69:in `require'
        from /usr/lib/ruby/3.3.0/bundled_gems.rb:69:in `block (2 levels) in replace_require'
        from /usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:30:in `require'
        from /usr/share/metasploit-framework/lib/msfenv.rb:28:in `<top (required)>'
        from <internal:/usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb>:136:in `require'
        from <internal:/usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb>:136:in `require'
        from /usr/bin/msfvenom:27:in `require_deps'
        from /usr/bin/msfvenom:44:in `init_framework'
        from /usr/bin/msfvenom:67:in `framework'
        from /usr/bin/msfvenom:472:in `<main>'

INFO[2025-06-03T22:04:13-07:00] [sliver/server/msf/msf.go:212] exit status 1
WARN[2025-06-03T22:04:13-07:00] [sliver/server/rpc/rpc-msf.go:197] Error while generating msf payload: exit status 1
ERRO[2025-06-03T22:04:13-07:00] [github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/logrus/options.go:224] finished unary call with code Unknown

I tried adding these lines to the service file to disable bootsnap, but it didn't work:

Environment="BOOTSNAP_IGNORE_DIRECTORIES=/usr/lib/llvm-19"
Environment="DISABLE_BOOTSNAP=1"
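
The stderr above suggests the real failure is a symlink cycle under /usr/lib/llvm-19 (build/Release resolving back into build) that bootsnap's recursive $LOAD_PATH scanner walks into until the kernel returns ELOOP. A minimal, self-contained sketch of that failure mode (the temp directory here is purely illustrative, not the actual llvm-19 layout):

```shell
#!/bin/sh
# Illustrative only: a symlink that resolves to itself. After ~40 link
# resolutions the kernel gives up with ELOOP, the same
# "Too many levels of symbolic links" error bootsnap hits above.
tmp=$(mktemp -d)
ln -s "$tmp/loop" "$tmp/loop"
ls "$tmp/loop/" 2>&1 | grep -o 'Too many levels of symbolic links'
rm -rf "$tmp"
```

On an affected box, something like ls -l /usr/lib/llvm-19/build should show whether a Release link points back into its own parent; if so, the cycle comes from the llvm-19 packaging (or however that link was created), and sliver/msfvenom are just the victims.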

@TheZenTester

TheZenTester commented Jun 4, 2025 via email


6 participants