output | input | instruction
---|---|---|
Are you certain the docker host is listening on port 80? It might be redirected from port 80 to whatever port it is listening on using the built-in firewall.
If you are running iptables, you could check this by using:
iptables -L -t nat
You would then see a chain named DOCKER which will tell you what redirects are in place, similar to this:
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
RETURN all -- anywhere anywhere
DNAT tcp -- anywhere anywhere tcp dpt:http-alt to:172.17.0.3:80
DNAT tcp -- anywhere anywhere tcp dpt:4433 to:172.17.0.3:443
DNAT tcp -- anywhere anywhere tcp dpt:1688 to:172.17.0.4:1688 |
I am working with CentOS 7 OS hosting a set of docker containers.
By using a web browser I can reach a service on port 80 and I get back a response.
A bit of local knowledge helps me understand that the response comes from one of the docker containers.
However, I have a big problem: I can't seem to find a way for the OS to indicate that port 80 is open.
Here is what I have tried (all as the root user):
netstat -tulnp | grep 80 lists nothing listening on port 80
ss -nutlp | grep 80 lists nothing listening on port 80
lsof -i -P | grep 80 also lists nothing listening on port 80
wget 127.0.0.1 successfully fetches index.html
Interrogating Docker directly through docker ps is not really the answer I am looking for, because we must be able to interrogate the OS and see what process is responsible for handling requests to port 80. It's also not helpful, because docker ps returns several containers that have the following entry in the PORTS column:
PORTS
80/tcp
8080/tcp
80/tcp
Again, I don't want to go to Docker for answers, because there must be a way to interrogate the OS and identify the process responsible for handling port 80.
My only guess is that docker installs some sort of low-level driver that intercepts such network requests.
Any suggestions on how to get CentOS to hand out this information, accompanied with command line commands, would be greatly appreciated!
| How can I get CentOS to correctly list open ports? |
Shortened solution:
netstat -lpunt | awk -F' +|:+|/' '$5{print $5,$10}' | sort -n
-F' +|:+|/' - field separator (space(s), colon(s) or slash)
$5 - port number
$10 - program name |
I want to print 2 fields together, all open ports and the application using it. This is my command but it only prints the port numbers and still missing the program field:
netstat -lnt -u -p | awk '{print $4}' | sed 's/.*://' | sort -n | uniq
How can I modify this to also print the program name, so that the "PID/Program name" field will return "java"?
A sample of "netstat -lnt -u -p" looks like this:
tcp , 0 , 0 , 10.194.194.21:36195 , 0.0.0.0:* , LISTEN , 2969/java
And I want to see only the port number and the program name:
36195 java | Print ports with the application using it |
I just simulated your scenario and was able to get 8081 in both netstat and lsof. lsof -i displays 8081 as tproxy and so your grep might not be finding it. Try this with -P which shows the numerical ports:
lsof -i -P | grep 8081 |
A strange situation. I started
telnet 0 8081
and lsof -i (run under root) doesn't list this connection, but netstat -n does.
Why can this be?
| Why can `lsof -i` not show an open connection which `netstat -n` lists? |
To avoid SSH password prompts:
sudo apt-get install sshpass
An alternative tool for package installation is dpkg
download the sshpass deb package
and install it:
sudo dpkg -i sshpass_1.04-1_amd64.deb
The pattern to use is as follows:
sshpass -p mypassword ssh user@server
If you also need to avoid the sudo password prompt:
ssh user@server "echo sudo_password | sudo -S ./script.sh"
The explanation for the last one: because sudo runs after ssh, it never gets a password prompt for sudo on the remote server, so the solution is to use -S and pipe the password to sudo as above.
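As a concrete sketch of the remote step inside the calling script (the host name, user, password and script path are placeholders, and the shared mount is assumed to hold the script):
# run the remote part, then continue with the local tasks
sshpass -p 'mypassword' ssh -o StrictHostKeyChecking=no user@otherhost 'bash /shared/mount/remote_task.sh'
echo "remote step finished, continuing locally"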
|
I have two remote machines, I'm running a script on one of them.
Part of the script should run on the other machine, after which the script will continue with its further tasks/commands.
For some reasons I cannot establish an SSH-without-password connection; additionally, I don't want any password prompts.
N.B: I have shared mount between them.
| Connectivity between 2 remote Linux machines |
netstat doesn't accept an IP address argument. The only non-option argument is a delay, and that's not in all versions.
The command will print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships, but only for the local machine. It doesn't have any access to data about other machines, only to connections to and from the machine it's running on. It can take a while to do that, and with a delay argument it will loop forever.
If you're interested in information about connections from your machine to a particular other IP address you can use grep for it. Note that netstat is deprecated in any case, and its replacement ss has better inbuilt support for that use case.
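For example (a.b.c.d and the port come from the question), both of these only ever report connections that the local machine itself has to that address:
netstat -tn | grep a.b.c.d          # only shows this machine's own connections to that host
ss -tn dst a.b.c.d                  # ss can filter on the destination address directly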
|
My remote Linux machine is able to ping an IP address on the internet, but I am unable to get a netstat report for that IP.
ping a.b.c.d
64 bytes from a.b.c.d: icmp_seq=1 ttl=64 time=0.509 ms
64 bytes from a.b.c.d: icmp_seq=2 ttl=64 time=0.249 ms
64 bytes from a.b.c.d: icmp_seq=3 ttl=64 time=0.273 ms
64 bytes from a.b.c.d: icmp_seq=4 ttl=64 time=0.357 ms
but netstat is unable to produce any output from
netstat -an a.b.c.d | grep <some port>
It just hangs and never returns my prompt, even though I am sure there are connections it should list.
| Why is my Netstat not returning the desired output and just getting hung up? |
I couldn't resolve what was trying to connect, but what killed whatever process was trying to connect was simply ifconfig interface_name down.
|
I have several problems:
some process is attempting to send data and the firewall is rejecting it at the rate it is sending it out
firewall logs flood the system (may need to rate-limit the logging)
lsof -i :port does not list the process, but there has to be something causing the packets to keep being sent. netstat -patune lists it in the SYN_SENT state, not listening.
The port that it is using does not make sense to me, so that is one oddity; the other is how traffic continues to be sent.
| what process is listening on a given port |
iptables does not prevent applications from opening TCP or UDP ports (because that could cause the applications to crash). Instead, it will prevent incoming packets from reaching the actual ports. If applied to outgoing traffic, it can stop the matching packets from being actually sent.
If you use iptables ... -j DROP, the processing of any packets matching the rule is just stopped short, effectively causing the packet to be ignored with no response whatsoever. For legitimate TCP connections, this typically causes the sender to hang until the connection times out. But since the host must still answer to ARP requests, a scanner can still detect the presence of the host and can mark the port as "firewalled".
(If the scanner is not in the same network segment, it will not be able to directly observe ARP responses, but it can deduce their presence/absence by seeing whether or not the router of the target segment responds to connection attempts with ICMP "Host unreachable" error messages.
No "Host unreachable" errors + no TCP Resets/ICMP "Port unreachable"s = port is most likely firewalled.)
If you use iptables ... -j REJECT, any packets matching the rule will be processed as if the destination port in question was never opened, regardless of the actual state of the port. This usually causes a TCP Reset or ICMP Error response packet to be sent back to the sender of the original packet, just like when attempting to connect to a closed port that has no firewall. This allows the sender to detect that the connection is being refused, so the connection attempt can fail quicker. For a scanner, such a result should be indistinguishable from a normal "closed" port.
If the sender has forged the source IP address of the original packet, the responses could be abused as part of a denial-of-service attack on another host, but modern OSs will deprioritize and/or rate-limit the sending of TCP Reset/ICMP Error packets by default, so this should not be a major concern.
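For example, if the goal were to make the port look closed rather than filtered, the asker's rule could be written with REJECT instead (a sketch; the explicit reject type is optional and only valid for TCP):
iptables -A INPUT -p tcp --dport 80 -j REJECT --reject-with tcp-reset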
|
I'm running an Embedded Linux system on MIPS, whose kernel is 3.18.21, with some applications on top of it. When I run iptables & ip6tables commands on the Linux system, such as the following:
iptables -A INPUT -p tcp --dport 80 -j DROP
ip6tables -A INPUT -p tcp --dport 80 -j DROP
TCP port 80 is for HTTP. I then found that HTTP connections to this Linux server (there is a web server app running on it) no longer work.
But when I run the netstat command as follows:
netstat -tuln | grep LISTEN
it shows the following (I only extract port 80):
tcp 0 0 :::80 :::* LISTEN
Does this mean port 80 is still open? Then why can't I use HTTP to access the web server running on the Linux system (I checked and confirmed that the web server process is still running)?
| Why do iptables commands yield seemingly contradictory results on my embedded Linux? |
Solution found: this command works perfectly:
watch "ss -o state syn-sent '( dport = :https or sport = :https )'"
This command also works fine:
while true;do sleep 2s && netstat -napotep|grep SYN_SENT; done |
I want to see the "syn_sent" socket state in real time during the connection process,
using ss or netstat or any other command.
I have tried these commands, but they all fail:
watch netstat -tnaop|grep -i syn
ss -4 state syn | How to show the "syn_sent" socket state on Linux in realtime? |
The listening socket isn't the one transporting data! The moment a listening socket gets a connection request, the accept() system call can create a new connected socket. The listening socket doesn't transport any data; it just waits for connection requests. The listening socket and the data-transporting sockets are two separate sockets.
Therefore, ss doesn't have much to show.
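If what you actually want is byte counters for the traffic that arrives through that listener, one workaround (a sketch, reusing the sip port from the question) is to look at the established sockets that share its local port:
ss -tiOp state established '( sport = :sip )'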
|
With ss -tuiOp we can view extended stats for an outbound process, e.g.:
tcp ESTAB 0 0 192.168.68.108:32862 52.86.220.33:https
users:(("chrome",pid=13907,fd=44)) cubic wscale:12,7 rto:292 rtt:91.131/1.147 ato:40 mss:1288 pmtu:1500 rcvmss:1288 advmss:1448 cwnd:10 bytes_sent:25761 bytes_retrans:108 bytes_acked:25654 bytes_received:136601 segs_out:1010 segs_in:630 data_segs_out:407 data_segs_in:522 send 1.13Mbps lastsnd:2184 lastrcv:2092 lastack:2092 pacing_rate 2.26Mbps delivery_rate 339kbps delivered:408 app_limited busy:36036ms retrans:0/2 dsack_dups:2 rcv_rtt:33522.9 rcv_space:67624 rcv_ssthresh:225644 minrtt:82.525However, this isn't viewable for listening ports using ss -tuiOpl:
tcp LISTEN 0 64 *:sip *:* users:(("linphone",pid=13355,fd=39)) cubic cwnd:10 Is there a way to get similar stats for listening ports? I'm particularly interested in bytes_sent, bytes_received, lastrcv.
| View extended stats for listening ports (using ss?) |
All of those things look completely normal.
The SSH agent is started on login as part of your graphical desktop. Yes it doesn't care whether you have an Ethernet cable or not (nor should it). Yes it gets a random socket address every time.
The Charon socket will be listening whenever Charon (the strongSwan daemon) is running. Whether it has any connections set up or not is irrelevant – if the overall service was configured to start on boot, it'll start on boot, just like Apache will start even if you don't really have a website yet.
(If you're running Debian it indeed configures any newly installed service to start on boot, whether the admin wants it or not...)
The two UDP listeners are IKE (the IPsec handshake protocol), which is literally Charon's job. If it's running, it'll listen for IKE packets.
ICE-unix sockets are used by the X11 session manager (e.g. gnome-session), as the traditional X11 session management protocol is built on top of "Inter-Client Exchange" IPC system (not to be confused with the ICE from STUN/WebRTC which is a different thing).
The one with a @ prefix is an abstract socket, which doesn't correspond to anything in the filesystem; abstract socket names don't necessarily look like paths at all. X11 uses both regular and abstract sockets for... legacy reasons.
The nameless sockets are client sockets. The listener socket has a path – client sockets don't. But they'll show up in netstat because showing all sockets is what netstat does.
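To see those unix sockets together with the owning process (listening ones only), something like this can help; abstract names show up with the leading @:
sudo ss -xlp | grep -i ice    # listening unix sockets, filtered for the ICE ones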
|
On a recently installed Debian system I noticed that every boot, even with no networking, a folder and empty socket file are generated in /tmp/ssh-(random_letters)/agent.xxx.
On every boot the random letters assigned to the folder name and random numbers assigned to the socket file change.
On this system there is no VPN or tunneling setup, but netstat shows a process for /var/run/charon.ctl listening on a unix 2 STREAM.
It also shows ipsec-nat-t and isakmp listening over UDP for local address 0.0.0.0 and foreign address 0.0.0.0:*.
If I run netstat -ap | grep (number assigned to empty socket file) it produces a bunch of matches running on unix 2 and 3.
Examples:
@/tmp/.ICE-unix/(number assigned to socket) x8
/tmp/.ICE-unix/(number assigned to socket) x1
All of them are preceded by a reference to:
(number assigned to socket)/x-session-manager. And then there are another 5 listings in the same format but with no path referenced. All connected, stream. And one more connected for dgram.
Is this normal?
| Random SSH Agent Generates on Boot in tmp Directory Even with Networking Disabled |
Pretty much anything running in a web browser, like YouTube, will use only HTTP and HTTP-based protocols like WebSockets. Web browsers forbid non-HTTP connections by websites for security reasons. So you're not usually going to see UDP, except possibly for HTTP-over-QUIC and HTTP/3.
Looking into the network monitor in Firefox while watching a video on YouTube, I can see a few requests to URLs like https://r4---sn-4g5e6nzz.googlevideo.com/videoplayback, returning webp (video) content.
Also in the firefox network monitor, the remote address is displayed as [2a00:1450:4001:1::9]:443 (IPv6).
Using netstat -tpen I can see a matching connection:
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
[...]
tcp6 0 0 (censored) 2a00:1450:4001:1::9:443 ESTABLISHED 1000 8120905 8478/firefox |
Learning about networks and trying to make sense of netstat output. This is the main confusion:
When I run the command, there are no recognizable URLs. For example, I'm listening to a YouTube stream and was expecting to see UDP for the streaming. Am I expecting something that isn't there? Any help?
Slice of the output:
tcp 0 0 arco:43424 lhr25s15-in-f1.1e:https ESTABLISHED
udp 0 0 arco:bootpc ttrouter.lan:bootps ESTABLISHED | Netstat doesn't show URLs while streaming from Youtube |
Regarding the 2nd part of your question, netstat -plantu will show you only TCP and UDP info, that is, established network connections and listening ports. netstat -a will show you unix sockets as well. That's a lot of info; it's better to target what you need in the output.
If you run a recent distro, you can use ss instead of netstat. It's a modern alternative and takes the same parameters.
I usually type ss -tulp (same as netstat -tulp) to check all listening ports on my servers/PCs plus the processes which opened the ports; any possible incoming traffic will be addressed to these ports. To check the current connections and processes, use ss -tuap. For -p you need root/sudo permissions in order to view processes of all users.
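If you want to check on your own system whether -l shows anything that -a misses, one quick sketch (it prints nothing when every line of netstat -l also appears in netstat -a):
comm -23 <(netstat -ln | sort) <(netstat -an | sort)    # lines unique to the -l output, if any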
|
I am using Ubuntu 16.04, but I believe my question applies to many distros, such as Debian, CentOS, and Red Hat.
The manpage for netstat -l is:
Show only listening sockets. (These are omitted by default.)
and netstat -a is:
Show both listening and non-listening sockets. With the --
interfaces option, show interfaces that are not upDoes the output of netstat -a include the output of nestat -l? It seems like so in the manpage but many websites talk about netstat -plantu so I am wondering if netstat -l covers something that netstat -a does not.
| Does netstat -l include anything that netstat -a does not have? |
Check the process number using the port; then with the process number you can read /proc/<pid>/sched and get the stats in milliseconds.
root@zaphod:/tmp# netstat -anp | grep -i postgr
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 904/postgres
tcp6 0 0 :::5432 :::* LISTEN 904/postgres
udp6 0 0 ::1:49729 ::1:49729 ESTABLISHED 904/postgres
unix 2 [ ACC ] STREAM LISTENING 13146 904/postgres /var/run/postgresql/.s.PGSQL.5432
root@zaphod:/tmp# head /proc/904/sched
postgres (904, #threads: 1)
-------------------------------------------------------------------
se.exec_start : 346550579.786859
se.vruntime : 67740.577403
se.sum_exec_runtime : 14266.931943
se.nr_migrations : 11174
nr_switches : 69572
nr_voluntary_switches : 69407
nr_involuntary_switches : 165
se.load.weight : 1048576
root@zaphod:/tmp#so I check
heroot@zaphod:/tmp# dc
346550579 1000 / 60 / 60 / 24 / pq
4I obtain around 4 days on this calculator
If I check my uptime (my postgres example starts at boot) so it would be equal (4days)
root@zaphod:/tmp# w
21:46:44 up 4 days, 18 min, 4 users, load average: 0.29, 0.65, 0.72
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
francois tty7 :0 Thu21 4days 28:10 0.05s /bin/sh /etc/xdg/xfce4/xinitrc -- /etc/X11/xinit/xserverrc
francois pts/1 tmux(7598).%0 Thu21 2:55m 30.29s 30.20s irssi
francois pts/2 tmux(7598).%1 Fri21 2.00s 0.21s 11.34s tmux
francois pts/3 tmux(7598).%2 18:17 2:37m 0.13s 0.13s -bash
root@zaphod:/tmp#
root@zaphod:/tmp# # IT IS OK
That's OK: my example process has been using its port for 4 days.
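The same arithmetic as the dc example can be done in one line (904 is the example PID from above, and the value is taken to be milliseconds as in the answer):
awk '/se.exec_start/ {printf "%.1f days\n", $3/1000/86400}' /proc/904/sched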
|
From the following netstat output we can see that the Air_metal application is using port 50070 with an established connection.
# netstat -anp | grep :50070 | grep ESTABLISHED
tcp6 0 0 100.14.74.11:48148 100.14.74.12:50070 ESTABLISHED 29455/Air_metal
Is it possible to check the time at which the Air_metal application started using port 50070 and established the connection?
| Is it possible to check the time at which an application started using a port and established a connection? |
"the mapping of $DISPLAY on C to $DISPLAY on B" what does that mean?
Clearly you grep the output of something on C, so you only see sockets on C which involve port 6010; other connections or listening sockets on C are grepped out.
You didn't see any connections before because no X client had been running and connected to sshd (port 6010); you see more info afterwards because you are now running an X client, which has connected to your sshd (port 6010).
You have to know the network topology when using an SSH tunnel. The SSH server on C opens a new socket which listens on port 6010 because it was asked to by the SSH client on B. The SSH tunnel is still established between the SSH client on B and the SSH server on C (port 22, if sshd is not specially configured); you don't see this tunnel connection since you grep it out. X clients on C connect to sshd (port 6010), then sshd multiplexes these connections over the SSH tunnel and forwards them to the X server on B.
"Connection between $DISPLAY on C and $DISPLAY on B" doesn't really exist, the ssh tunnel is created between C:22 and address_of_the_SSH_client_on_B. And since it's a connection, it's not possible in the LISTENING state.
use netstat -ap without grep to see more information.
All the connections we mention in this answer are real TCP connections, from the kernel's view, not "connections" from the end-user's view.
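A narrower view than plain netstat -ap, showing both the forwarded X connections and the SSH connection that carries them, might be (a sketch):
sudo ss -tnp '( sport = :6010 or dport = :6010 or sport = :22 or dport = :22 )'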
|
On machine B, I remote access machine C
$ ssh -X t@C
$ echo $DISPLAY
localhost:10.0
How can I find/verify the mapping of $DISPLAY on C to $DISPLAY on B? Can it be done by the following command on C?
$ netstat -a | grep 6010
tcp 0 0 localhost:6010 0.0.0.0:* LISTEN
tcp6 0 0 ip6-localhost:6010 [::]:* LISTEN
Why is the connection between $DISPLAY on C and $DISPLAY on B LISTEN, not ESTABLISHED, given that the X forwarding channel has been created?
When I run a X client on C, how can I verify that it is connected to the X server on B (the local machine)? Why do I get more information about port 6010 in the following than before running the X client?
$ eog &
[1] 1129
$ netstat -a | grep 6010
tcp 0 0 localhost:6010 0.0.0.0:* LISTEN
tcp 0 0 localhost:59782 localhost:6010 TIME_WAIT
tcp 0 0 localhost:59780 localhost:6010 ESTABLISHED
tcp 0 0 localhost:59778 localhost:6010 TIME_WAIT
tcp 0 0 localhost:6010 localhost:59780 ESTABLISHED
tcp6 0 0 ip6-localhost:6010 [::]:* LISTEN
Thanks.
| How can I find the mapping on `$DISPLAY` after `ssh -X`? |
No, the local address is always the end of the connection that was opened by the process being described. In this case, the MySQL server process listens on port 3306, so that’s its local address in any established connection. The queues are also specific to the connection direction described.
For an established connection, you should see the symmetric connection elsewhere in netstat or ss’s output.
TIME_WAIT connections are a special case. TIME_WAIT is used to ensure a new connection doesn’t receive stray packets; only the end of the connection which initiates its termination ever reaches that state (because the other end knows that its correspondent won’t send anything more). The connection is preserved by the operating system, so it’s no longer associated with a process; the local address is the end which closed the connection.
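For example, to see both directions of the MySQL connection from the question in one go (a sketch using ss):
sudo ss -tnp '( sport = :3306 or dport = :3306 )'
The same loopback connection then shows up twice, once from each endpoint's point of view.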
|
If I have the following netstat output:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:3306 127.0.0.1:21950 ESTABLISHED 2129/mysqld
tcp 0 0 127.0.0.1:38766 127.0.0.1:10033 TIME_WAIT -
If both ends of the socket are on the local machine, can the server and the client appear in either column?
| Are the 'Local Address' and 'Remote Address' netstat columns symmetrical for localhost? |
"So many of them?" It's using precisely one, albeit on both the IPv4 and IPv6 interfaces.
Any service needs to be listening (or have a service aggregator such as xinetd listen by proxy) to some port or socket in order for incoming connections to be accepted.
In /etc/services, you can see git's port, 9418:
git 9418/tcp # Git Version Control System |
I am studying my listening services, and I am trying to work out how to identify each type of git listening service so I can kill the right one in the right situation, and/or both.
The services are needed for git push and git pull or git clone [repos], working also for a git server (DopeGhoti).
Code where I do not understand what each listening service is doing
masi@masi:~$ netstat -lt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:git *:* LISTEN
tcp6 0 0 [::]:git [::]:* LISTEN
Doing netstat -plnt works, but how do I determine which entry belongs to Git service A or B?
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:5348 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:17991 0.0.0.0:* LISTEN 24698/rsession
tcp 0 0 0.0.0.0:9418 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:34893 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp6 0 0 :::9418 :::* LISTEN -
tcp6 0 0 :::9999 :::* LISTEN -
tcp6 0 0 :::111 :::* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
tcp6 0 0 :::33875 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 ::1:5432 :::* LISTEN -
tcp6 0 0 ::1:25 :::* LISTEN -
OS: Debian 8.7
Git: 2.1.4
| How to identify and kill git listening services here? |
After a while I have just returned to this issue, and finally solved it.
The problem is that OpenWRT is patching the kernel source, and an extra option should be disabled, namely CONFIG_PROC_STRIPPED. This is located in
(make) kernel_menuconfig -> File systems -> Pseudo filesystems -> [ ] Strip non-essential /proc functionality to reduce code size
The way I figured it out was by looking at the patched version of the kernel source, not the official one.
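To confirm the option is really off after rebuilding, one can grep the generated kernel config in the OpenWRT build tree (a sketch; the exact path depends on your target and kernel version):
grep PROC_STRIPPED build_dir/target-*/linux-*/linux-*/.config
# expected after the fix: "# CONFIG_PROC_STRIPPED is not set"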
Thank you for all the effort you made!
|
I am trying to compile my own Linux for embedded devices, using the OpenWRT distribution. I am trying to get some multicast information using the /proc/net/netstat interface, but it is not found (normally this is available on my desktop).
If I am right this should be enabled in the kernel_menuconfig but I am not able to find any option related to this.
UPDATE: I was trying with kernels 3.10.49 and 4.4.14. In both cases proc.c is compiled (proc.o is available in my build_dir), /proc is mounted, but /proc/net/netstat does not exist.
| /proc/net/netstat not found |
I would try
... | awk '$5 ~ /:80$/ { split($5,A,":") ; if ( !u[A[1]]++ ) print A[1] ;}'
which should filter on the remote IP on port 80. No need to grep | awk | sed!
$5 ~ /:80$/ filter fifth field ending in 80
!u[A[1]]++ is valid only once
split() will put the IP in A[1] (and the port in A[2]), at least for pure IPv4.
To get the same effect as watch, use a loop like:
while true
do
netstat -tn 2>/dev/null | awk '$5 ~ /:80$/ { split($5,A,":") ; if ( !u[A[1]]++ ) print A[1] ;} '
sleep 5
done |
I have had a few issues with a server recently. So i just wanted to leave a window showing the unique and IP's of connected devices.
I have been using:
watch -n 5 "netstat -tn 2>/dev/null | grep :80 | awk '{print $5}' | sed 's/.*::ffff://' | sort | uniq -c | sort -nr"Here is an example of the output when the formatting failsHere is a example of the Netstat without formattingreason for the confusion, is i am using awk '{print $5}' to print the 5th column only
I am assuming its because i am trying to use watch with pips and something does not agree with the other.
Can anyone suggest a tweak to the one liner, Or can anyone advise of another tool to monitor the active connections to the server (Not interested in local connections)
| Monitoring Server Connections - Netstat formatting issue |
If you do a netstat -a you get just the hostnames and names of services.
Example
$ netstat -a|head -20
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:sunrpc *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 localhost.localdomain:ipp *:* LISTEN
tcp 0 0 *:db-lsp *:* LISTEN
tcp 0 0 *:58460 *:* LISTEN
tcp 0 0 *:17501 *:* LISTEN
tcp 0 0 *:lv-jc *:* LISTEN
tcp 0 0 *:ellpack *:* LISTEN
tcp 0 0 greeneggs.bubba.net:37050 stackoverflow.com:http TIME_WAIT
tcp 0 0 greeneggs.bubba.net:34320 stackoverflow.com:http ESTABLISHED
tcp 0 0 greeneggs.bubba.net:34223 stackoverflow.com:http ESTABLISHED
There is nothing in this output that will match your IP address, since it's just names. If you want to forgo showing names and just show the numbers, use the -n switch to netstat:
$ netstat -an|head -20
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:58460 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:17501 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:2143 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:2025 0.0.0.0:* LISTEN
tcp 0 0 192.168.1.20:36188 198.252.206.16:80 ESTABLISHED
tcp 0 0 192.168.1.20:37112 198.252.206.25:80 TIME_WAIT
tcp 0 0 192.168.1.20:37116 198.252.206.25:80 TIME_WAIT |
My client is a MS Excel plug-in running on my laptop (client_machine1) that connects to a Solaris server (server1) to request some WebLogic Application running on port 28080.
bash-3.2$ set | grep SSH_CONNECTION
SSH_CONNECTION='<client_machine1 IP Address> 64134 <Server1 IP Address> 22'
bash-3.2$ netstat -a | grep <client_machine1 IP address>
<Server1 FQDN>.28080 <client_machine1 IP address>.49592 260836 0 49950 0 FIN_WAIT_2
<Server1 FQDN>.28080 <client_machine1 IP address>.49595 261216 0 49950 0 FIN_WAIT_2
<Server1 FQDN>.28080 <client_machine1 IP address>.49596 261216 0 49950 0 FIN_WAIT_2
<Server1 FQDN>.ssh <client_machine1 IP address>.64134 65024 135 49950 0 ESTABLISHED
During my client session, I kept checking the output of netstat -a | grep <client_machine1 IP address> to see if my client's IP address shows up. Only for a brief period of time was I able to see it and capture the above output. For the rest of the time I can only see the ssh connection entry. I find this confusing as the client session is active at that time and data exchange is in progress.
Am I missing something here..? I also tried lsof | grep <client_machine1 IP address> but that doesn't return anything.
| Why doesn't my client IP address appear in the netstat output? |
It seems that the Program name column's width is hard-coded[0] in netstat to 20 characters, so it is not possible to widen it without modifying the source itself.
Like you, I use ps to get broader program name. Ways to list full program names are discussed here [1].
[0] https://sources.debian.org/src/net-tools/2.10-0.1/netstat.c/#L256
[1] netstat: See process name like in `ps aux`
|
There is an old post about line truncation in netstat (Netstat output line width limit) but my question is a bit different.
I'm using netstat (net-tools 2.10) on Debian 12. My primary use is to list listening ports, e.g. netstat -tunlpWee
I find the PID/Program name column to be too narrow. Is there a way to widen it?
Option -T is unsupported. Option -W (--wide) does not help as this only affects IP addresses. Option -e is about "additional information," not "wider information."
At this point, I see my only option to be to wrap netstat in a script and leverage ps to get a broader "program name." Unless ... I'm missing something obvious.
UPDATE:
Thanks, davidt930. That's disappointing. I came up with this solution:
#!/usr/bin/env bash
# show applications using ports
# use sudo to get the process name
# The "PID/Program name" as returned by netstat(8) is too narrow for my tastes.
# Therefore, I wrap netstat's output in a series of calls to ps(1) to get
# broader application details, i.e. the full command line.
PPWID=20
data=
while IFS= read -r ln ; do
[ -z "$data" ] && {
echo "$ln"
[ "${ln/PID\/Program name/}" != "$ln" ] && data=Y || :
continue
} || :
static="${ln:0:-$PPWID}"
program="${ln:0-$PPWID}"
[ "${program:0:1}" = "-" ] && command="(need privileges)" || {
pid=${program%%/*}
command=$(ps -o command -p $pid | tail -1)
}
echo "${static}${command}"
done< <(netstat -tunlpWee)
It's a tad fragile as it relies on netstat keeping the PID/Program name column fixed at 20.
| Can you widen the columns in netstat, specifically "PID/Program name" ...? |
Your assumption is correct for the first service: it listens on localhost and localhost6. The second one appears to listen on the host's IPs, unlike the first. But there is a chance that the second one also tries to listen on localhost; you can check by stopping both and starting only the second one.
And if you permit me a humble recommendation: separate the ports for service one and service two, making them for example 3001 and 3002.
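One way to see exactly which process owns which of the two sockets (a sketch):
sudo lsof -nP -iTCP:3000 -sTCP:LISTEN
The NAME column then shows 127.0.0.1:3000 for the first server and *:3000 for the second.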
|
I was surprised to discover that two servers I'm maintaining are both able to listen on port 3000 in development at the same time.
With the first server running, netstat shows
▶ sudo netstat -nap tcp | grep 3000
tcp6 0 0 ::1.3000 *.* LISTEN
tcp4 0 0 127.0.0.1.3000 *.* LISTEN And with both running:
▶ sudo netstat -nap tcp | grep 3000
tcp4 0 0 *.3000 *.* LISTEN
tcp6 0 0 ::1.3000 *.* LISTEN
tcp4 0 0 127.0.0.1.3000 *.* LISTEN My interpretation of this is that the first server has bound port 3000 only for localhost (127.0.0.1), and the second server has bound port 3000 for 'any' (0.0.0.0) address. Is that right?
The behaviour seems to be that the first server supersedes the other for traffic specifically to http://localhost:3000, which makes sense I suppose. I just wanted to confirm my understanding for this slightly surprising scenario, I would have thought that trying to listen to 'any' address would fail if any address with that port was already bound.
| Can two servers bind the same port? |
I would do it the other way round.
I assume you can connect to the remote hosts,
and the remote hosts are unix. Just run
ss -tanp | awk '$5 == "18.23.292.9:8088"' on the remote hosts, assuming also that no NAT is set up. |
with the following command I want to get which are the IP's that connected on my machine with port 8088
18.23.292.9 is machine that resource manager service is running on with port 8088
ss -tanp | grep 8088 | grep ESTAB
ESTAB 0 0 18.23.292.9:8088 118.2.291.2:52874 users:(("java",pid=13970,fd=829))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:56379 users:(("java",pid=13970,fd=668))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:52337 users:(("java",pid=13970,fd=666))
ESTAB 0 0 18.23.292.9:8088 118.2.280:34088 users:(("java",pid=13970,fd=790))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:59794 users:(("java",pid=13970,fd=660))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:59415 users:(("java",pid=13970,fd=665))
ESTAB 0 0 18.23.292.9:8088 118.2.279:53610 users:(("java",pid=13970,fd=750))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:63875 users:(("java",pid=13970,fd=661))
ESTAB 0 0 18.23.292.9:8088 110.6.52.2:50267 users:(("java",pid=13970,fd=667))
Now I want to know which applications/services on the remote machines are actually connected to port 8088.
The reason is that we saw many connections to port 8088 and we want to know which processes are trying to connect.
The machines are, for example, 118.2.291.2, 110.6.52.2, etc.
Meanwhile I created, without success, the following script that captures the IP and port of the machines that are connected:
#!/bin/bash
port=`netstat -anp | grep :8088 | grep ESTAB | head -1 | awk '{print $5}' | sed s'/:/ /g' | awk '{print $2}'`
IP=`netstat -nape | grep $port | awk '{print $5}' | sed s'/:/ /g' | awk '{print $1}'`
export PORT=`netstat -nape | grep $port | awk '{print $5}' | sed s'/:/ /g' | awk '{print $2}'`
echo $IP
echo $PORT
Maybe another good example:
here is a good example of how to find out which process is currently using a certain port in Linux, and we also get the list of machines that are connected (on the right side):
lsof -i tcp:8088
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 13970 yarn 396u IPv4 1052681821 0t0 TCP *:radan-http (LISTEN)
java 13970 yarn 559u IPv4 1201044836 0t0 TCP master02.bigdata130.cgnt:radan-http->worker01.TATA130.cgnt:47506 (ESTABLISHED)
java 13970 yarn 617u IPv4 1201044953 0t0 TCP master02.TATA130.com:radan-http->master03.TATA130.com:33736 (ESTABLISHED)
java 13970 yarn 621u IPv4 1200925788 0t0 TCP master02.TATA130.com:radan-http->master01.TATA130.com:37762 (ESTABLISHED)
java 13970 yarn 631u IPv4 1201038517 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56258 (ESTABLISHED)
java 13970 yarn 634u IPv4 1201046323 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56272 (ESTABLISHED)
java 13970 yarn 635u IPv4 1201038518 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56270 (ESTABLISHED)
java 13970 yarn 664u IPv4 1201049689 0t0 TCP master02.TATA130.com:radan-http->kafka03.TATA130.com:39486 (ESTABLISHED)
java 13970 yarn 693u IPv4 1201050710 0t0 TCP master02.TATA130.com:radan-http->worker02.TATA130.com:39090 (ESTABLISHED)
java 18394 ambari 1511u IPv4 1201046322 0t0 TCP master02.TATA130.com:56258->master02.TATA130.com:radan-http (ESTABLISHED)
java 18394 ambari 1515u IPv4 1201049634 0t0 TCP master02.TATA130.com:56270->master02.TATA130.com:radan-http (ESTABLISHED)
java 18394 ambari 1516u IPv4 1201008383 0t0 TCP master02.TATA130.com:41112->master01.TATA130.com:radan-http (ESTABLISHED)
java 18394 ambari 1517u IPv4 1201038519 0t0 TCP master02.TATA130.com:56272->master02.TATA130.com:radan-http (ESTABLISHED)
It would also be very useful if we knew the user of the PID that is using the port on the target machines,
for example
java 13970 yarn 617u IPv4 1201044953 0t0 TCP master02.TATA130.com:radan-http->master03.TATA130.com:33736 (ESTABLISHED) PID=32424 user=root
java 13970 yarn 621u IPv4 1200925788 0t0 TCP master02.TATA130.com:radan-http->master01.TATA130.com:37762 (ESTABLISHED) PID=324424 user=yarn
java 13970 yarn 631u IPv4 1201038517 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56258 (ESTABLISHED) PID=324224 user=yarn
or maybe explained this way:
Let's take the line
java 13970 yarn 617u IPv4 1201044953 0t0 TCP master02.TATA130.com:radan-http->master03.TATA130.com:33736 (ESTABLISHED)
So on the master03 machine the port is 33736.
So if we log in to the master03 machine and do
netstat -nlp | grep :33736
tcp 0 0 0.0.0.0:33736 0.0.0.0:* LISTEN 13970/java
and
ps -ef | grep 13970 | grep -v grep | awk '{print $1}'
yarn
So my question is: can we pipe the command lsof -i tcp:8088 into other commands that give us the expected results, or maybe is there another idea, such as a script?
Expected results
java 13970 yarn 617u IPv4 1201044953 0t0 TCP master02.TATA130.com:radan-http->master03.TATA130.com:33736 (ESTABLISHED) PID=32424 user=root
java 13970 yarn 621u IPv4 1200925788 0t0 TCP master02.TATA130.com:radan-http->master01.TATA130.com:37762 (ESTABLISHED) PID=324424 user=yarn
java 13970 yarn 631u IPv4 1201038517 0t0 TCP master02.TATA130.com:radan-http->master02.TATA130.com:56258 (ESTABLISHED) PID=324224 user=yarn | how to know what the process that connected to my machine VIA specific port |
Each Screen session is its own “server”; these are the “SCREEN” processes, and they are the processes which continue running when you detach from a session. The “client” is a “screen” process which connects to the corresponding session and allows you to interact with it; these are short-lived (relatively speaking), and only last as long as they are attached to a session.
You can see all your current user’s running sessions with
screen -ls
This will show the process identifiers, tty and host of all the available sessions.
ps -fC screen
will show all the running screen processes, both sessions and clients.
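The session sockets you saw in netstat live under /run/screen; listing that directory is another quick way to enumerate your own sessions:
ls -l /run/screen/S-$USER/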
|
Does GNU Screen have a server-client architecture model?
Is each Screen session a Screen client process?
Why can't I find out the Screen server process, but only its session processes i.e. client processes? (I suppose both Screen server and client processes contain a substring screen in their names up to cases)
$ sudo netstat -a | grep -i screen
[sudo] password for t:
unix 2 [ ACC ] STREAM LISTENING 2807736 /run/screen/S-testme/3341.testme
unix 2 [ ACC ] STREAM LISTENING 2809282 /run/screen/S-testme/3875.tm
unix 2 [ ACC ] STREAM LISTENING 4533106 /run/screen/S-t/27525.test
$ ps aux | grep -i [s]creen
testme 3341 0.0 0.0 45416 2428 ? Ss Nov30 0:00 SCREEN -S testme
testme 3875 0.0 0.0 38860 2380 ? Ss Nov30 0:00 SCREEN -S tm
t 27525 0.0 0.0 45828 3740 ? Ss 07:22 0:00 SCREEN -S test
How can I find out the Screen server process?
Thanks.
| How can I find out the Screen server process? |
After further investigation through chat, the problem was identified with a specific crontab on OP's system. This was identified using the parent PID of the nc process, which showed the following connection:
nc -l -p 45454 -e /usr/sbin/link -> /bin/sh -c nc -l -p 45454 -e /usr/sbin/link -> /usr/sbin/CRON -f
The user account associated with the nc process was named 'link', and had a crontab associated with it. This crontab contained a cron job with the same nc command, scheduled to be run every minute. Since the nc command had been specified to listen indefinitely, new nc processes were being created every minute.
The issue was resolved by commenting out the specific entry in the crontab file.
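For reference, the tracing steps looked roughly like this (the PID is a placeholder; the user name 'link' comes from the case above):
ps -o ppid=,cmd= -p <nc_pid>    # find the parent of the nc process (it was CRON here)
sudo crontab -u link -l         # inspect that user's crontab for the offending entry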
|
I am struggling to close these ports which seem to be backdoors -- or at least I have never opened and used them. How can I close or shutdown nc and close these ports?
netstat -lntup | grep nc
ps -ef
| How to close nc port in Debian 9? |
The connections are probably DNATed to the containers. That means the host is now acting as a router between "outside" and the containers. netstat will not display those connections. You will need additional tools for the missing flows.
One such tool is conntrack, which queries conntrack about tracked connections. Using this command with option -j:
conntrack -L -j
will display only NATed connections, thus showing the current active established flows between the containers and outside, complementing the output of netstat.
If you want an output similar to netstat, you could try netstat-nat, if available, which more or less relies on the same mechanism.
Another method, to run in a loop, would be to query Docker (using docker directly on the host) for each container's main PID and use the result to access the container's network namespace and run a normal netstat there. This has the advantage of displaying certain states that no longer show up with conntrack (like CLOSE_WAIT, usually a symptom of problems in an application).
Given a running Docker container named containername, this should get all its network connections, as seen from its own point of view, even if the container itself lacks any useful command for this:
nsenter --target $(docker inspect --format '{{.State.Pid}}' containername) --net netstat -utn |
I'm not able to detect incoming connections from Nextcloud sync clients via netstat on my server.
I have a server in my LAN, running Nextcloud with MySQL in docker containers. I use multiple Nextcloud Clients (Linux, macOS and iOS), everything is working fine.
I want to check if clients are connected to my server on host level. With netstat I'm able to see if a client is connected via the Nextcloud web UI, but I don't recognize connections of the Nextcloud sync client.
Does anyone know the netstat parameter I'm missing? Any hint is welcome.
BR Stefan
| Monitor Nextcloud connections with netstat |
I did the following experiment to illustrate my comment above. I use the netcat command to implement two simple TCP servers. My scenario differs from yours a bit in that I explicitly bind to the public IP instead of *:8081.
# Terminal 1
$ nc -kl 127.0.0.1 24482In a separate terminal:
# Terminal 2
$ nc -kl <public_ip> 24482From another terminal on the local host:
# Terminal 3
$ telnet localhost 24482
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
hi
^]
telnet> q
Connection closed.After that, I see hi in Terminal 1.
Next, from a remote node:
# Terminal 4 (on remote node)
$ telnet <public_ip> 24482
Trying <public_ip>...
Connected to <public_ip>.
Escape character is '^]'.
ho
^]
telnet> q
Connection closed.After that, I see ho in Terminal 2.
I suspect that this is the behavior that you would see, although I don't have a Solaris environment in which to test it.
|
How is it possible that netstat -a | grep 8081 shows this:
localhost.8081 *.* 0 0 49152 0 LISTEN
*.8081 *.* 0 0 49152 0 LISTEN
I don't really understand what the second entry means.
UPDATE_1: I've checked that two different processes are listening on 8081... I used to believe that this was not possible. One process is JBoss, whose port 8081 is used to serve browser requests, and the other is Gitblit GO (it could have an embedded server in the JAR), whose port 8081 is used for shutdown.
| Two local address listening on same port? |
In IPv6 terminology, ::1 is the loopback address (e.g. 127.0.0.1 in IPv4 terminology).
It is essentially 0:0:0:0:0:0:0:1 (or more precisely, but uninterestingly, 0000:0000:0000:0000:0000:0000:0000:0001) with all the 0's collapsed down into ::1. It is functionally equivalent to the IPv4 127.0.0.1 and performs the same role.
So in the first output, the tcp6 line is listening on the IPv6 loopback address, not on all addresses, and hence is not visible externally.
The second example (:::5901) shows the unspecified IPv6 address ::, followed by an additional colon and the port number. This is functionally equivalent to the unspecified IPv4 address with port in 0.0.0.0:5901 and hence is open to the network over any IPv6 address.
|
If I ssh root@server -R 5901:localhost:5900 and netstat -an I get:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN
tcp6 0 0 ::1:5901 :::* LISTEN
Whereas if I allow GatewayPorts yes in my ssh_config and do the same I get
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:5901 0.0.0.0:* LISTEN
tcp6 0 0 :::5901 :::* LISTEN(and my service is reachable from the outside network)
How do you read the format ::1:5901 (as opposed to :::5901)?
edit :
How do you read that one is not open to the public network?
0.0.0.0 means "all IP addresses on the local machine"
| reading the output of netstat for tcp |
Hint:
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)Try running netstat using sudo, i.e. sudo netstat ...
|
I have some open ports on my laptop but netstat is not reporting which PID/Program is associated with them:
$ netstat -tulpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8307 0.0.0.0:* LISTEN -
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13819 0.0.0.0:* LISTEN 32107/skype
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:902 0.0.0.0:* LISTEN -
tcp6 0 0 :::443 :::* LISTEN -
tcp6 0 0 :::902 :::* LISTEN -
How can I figure out which process has these open? In particular 443.
| Open ports with no associated PIDs |
I understand the routing table is a "fall through" table
Not really. The routing table is ordered from "most specific route" to "least specific route". Your default route is via br0, and is defined as the route of last resort because there is no netmask (i.e. genmask is 0.0.0.0).
because the 1st entry is 0.0.0.0 all traffic will go through the tun1 interface
Although this is the correct conclusion, unfortunately it's the wrong reasoning. Here is your routing table ordered visually to represent the order used for routing (top is best match):
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
10.182.1.1 10.182.1.5 255.255.255.255 UGH 0 0 0 tun1
10.182.1.5 0.0.0.0 255.255.255.255 UH 0 0 0 tun1
46.23.68.178 10.0.1.1 255.255.255.255 UGH 0 0 0 br0
10.0.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br0
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
0.0.0.0 10.182.1.5 128.0.0.0 UG 0 0 0 tun1
128.0.0.0 10.182.1.5 128.0.0.0 UG 0 0 0 tun1
0.0.0.0 10.0.1.1 0.0.0.0 UG 0 0 0 br0
The default route is still via br0. However, there are two more specific routes (the netmask is 128.0.0.0), each of which will match half the numerically available IPv4 address space, so these will match all non-local traffic.
My goal is to only route 1 or 2 websites via the Website - specifically a1505.g2.akamai.net which according to nslookup maps to the following IPs: 195.59.150.43 and 195.59.150.26.
I'm not sure what you mean by "the Website"; I'm going to assume that's tun1 and that you want to stop all your traffic going that way.
To do this with OpenVPN you simply remove the directive route-gateway def1 from its configuration file. (If you're using something like NetworkManager then there should be an option you need to untick that marks the connection as your default route.)
Having done this all you need to do is to add two routes, one for each host, via the gateway for tun1:
route add -host 195.59.150.43 gw 10.182.1.5
route add -host 195.59.150.26 gw 10.182.1.5
These are host routes, so the netmask is implicitly /32 (i.e. 255.255.255.255); they take precedence over everything and in my visually ordered table would sit with the three entries at the top of the list.
Actually, you should be able to do this in the OpenVPN configuration file, too. This would allow the routes to be brought up and removed automatically with the VPN itself. Depending on your setup the configuration would either be in the server, where the routes would get "pushed" to the client, or as directives in the client's OpenVPN configuration file:
route 195.59.150.43
route 195.59.150.26 |
I have a DD-WRT router with an OpenVPN service configured. I'd like to send only certain source IP's over the vpn connection.
I believe my current routing table (as seen with
netstat -rn) sends all my traffic over the vpn on interface tun1. From what I understand the routing table is a "fall through" table so in this case because the 1st entry is 0.0.0.0 all traffic will go through the tun1 interface.
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.182.1.5 128.0.0.0 UG 0 0 0 tun1
0.0.0.0 10.0.1.1 0.0.0.0 UG 0 0 0 br0
10.0.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
10.182.1.1 10.182.1.5 255.255.255.255 UGH 0 0 0 tun1
10.182.1.5 0.0.0.0 255.255.255.255 UH 0 0 0 tun1
46.23.68.178 10.0.1.1 255.255.255.255 UGH 0 0 0 br0
127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo
128.0.0.0 10.182.1.5 128.0.0.0 UG 0 0 0 tun1
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br0
My goal is to only route 1 or 2 websites via the Website - specifically a1505.g2.akamai.net which according to nslookup maps to the following IPs:
Address: 195.59.150.43
Address: 195.59.150.26
I believe what is required is 3 steps.
1) Issue some sort of iptables command to delete the 1st routing entry - which in doing so would stop sending traffic through the vpn tun1
2) Issue two commands to tell the destination 195.59.150.43 and 195.59.150.26 to route through tun1
However, I find iptables to be rather confusing in all honesty.
Is this the correct approach and if so could somebody perhaps give me a sample command or two?
Thanks!
| Routing Configuration sending some traffic over VPN |
The file /etc/services only provides number-to-name mapping for ports, and has nothing to do with what ports ss lists. Additionally, the names in this file are common uses for the ports, not what the port is actually used for on your system, as any program (with sufficient permissions) can open any port for any purpose.
What ss lists is what ports on your system are in use. If it is listed in ss, it is in use. If you run ss -p as root, it should list what process specifically has the port open.
You can't directly modify what ports are in use. What you need to do is use sudo ss -p to find out what program is using the port and kill it. Or, conversely, if a port is not listed in ss and you expect it to be, start the service that uses the port.
It is a common misguided perception that ports can be created and deleted. Ports are just a number, they always exist and can't be deleted or created. They are either in use or not in use.
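For example, to see which process owns one of the listening UDP ports from your output (5353 is commonly mDNS/avahi), something like this (a sketch):
sudo ss -ulnp 'sport = :5353'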
|
When trying to see port clashes within my system, many websites online recommend using /etc/services or ss -tunl to see port info
I am noticing /etc/services is providing different information to -ss on most occasions.
Output comparison examples
sudo cat /etc/services
ftp 21/udp
ftp 21/sctp
ssh 22/tcp
ssh 22/udp
ssh 22/sctp
telnet 23/tcp
telnet 23/udp
smtp 25/tcp
versus
ss -tunl
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 0.0.0.0:5353 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:46670 0.0.0.0:*
udp UNCONN 0 0 [::]:5353 [::]:*
udp UNCONN 0 0 [::]:38838 [::]:*
Is /etc/services a static data file that should only be used as a guide, rather than a true reflection of the real port configuration of the system?
Where does the ss program gather this port data, and how can I modify/delete some of the ports, either through ss or another program?
| Where does ss command gather its data for ports etc |
This was solved by a patch release for the application server in use.
|
If I check how many connections serverA (192.168.1.1) has open to serverB (192.168.2.1), I get the following response:
[username@serverA ~] $ netstat -n | grep 192.168.2.1
tcp 0 0 192.168.1.1:51846 192.168.2.1:10001 ESTABLISHED
tcp 0 0 192.168.1.1:50872 192.168.2.1:10001 ESTABLISHED
tcp 0 0 192.168.1.1:51824 192.168.2.1:10001 ESTABLISHED
tcp 0 0 192.168.1.1:51848 192.168.2.1:10001 ESTABLISHED
[username@serverA ~] $ netstat -n | grep 10.79.165.145 | wc -l
4However, if I do the opposite and check on serverB how many connections it has open to serverA, I get this:
[username@serverB ~] $ netstat -n | grep 192.168.1.1
tcp 0 0 192.168.2.1:10001 192.168.1.1:51846 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:55122 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:59930 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:50352 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:44142 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:57698 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:38268 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:41822 ESTABLISHED
... many more connections ...
tcp 0 0 192.168.2.1:10001 192.168.1.1:43840 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:50870 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:34100 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:34620 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:41126 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:49298 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:50004 ESTABLISHED
tcp 0 0 192.168.2.1:10001 192.168.1.1:51408 ESTABLISHED
[username@serverB ~] $ netstat -n | grep 192.168.1.1 | wc -l
104I was not expecting there to be a mismatch in the number of the connections between the 2 servers, essentially serverB thinks there are a lot more connections open to serverA and than that serverA does to serverB.
These servers are on different VLANs and the connections do go through a firewall. Both serverA and serverB are RHEL v7 VMs running on ESXI. They don't run any containers or anything that would do NAT.
What could be responsible for the mismatch in open connection numbers
| Running netstat on 2 servers checking connections to the other one shows mismatch in number of connections |
enp2s0f0 is your primary network interface. You can read about the convention for naming network interfaces on the wiki.
The RX-OK/ERR/DRP/OVR columns give statistics about the packets that have been received by the interface. OK stands for packets correctly received.
TX-OK stands for packets correctly transmitted respectively.
Counters always increase for 'TX-OK', it means that outgoing traffic is bigger than incoming for your server.
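If you want to watch those per-interface counters directly, iproute2 reports the same information (a sketch, using the interface name from your output):
ip -s link show dev enp2s0f0    # RX/TX byte, packet, error and drop counters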
|
I'm testing my Linux server with the command netstat -i. I just want to know what enp2s0f0 means. Each time I execute netstat -i, its RX-OK and TX-OK always increase. What does this mean?
| Why do RX-OK and TX-OK increase |
It seems to me that the problem is within your client, because Recv-Q for established connections is the count of bytes not copied by the user program connected to this socket. It means your application does not read data from the socket.
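As a starting point for debugging, when it happens again you could inspect the stuck socket's timers and counters from the client side, and your keepalive settings (a sketch; 6379 is the default Redis port and is an assumption here):
ss -tinoe 'dport = :6379'    # per-socket timers, keepalive state and queue sizes
sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes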
|
My networking skills are quite poor and I am trying to understand the following netstat -ton meaning.
We have redis server and a client with a thread connecting to it via PUB/SUB. The client SUBSCRIBE to a Redis channel.
This, I guess, creates a long lived TCP link between both and the server sends data to the client when something happens on the channel.
However, from time to time (a month to three months) the client stops receiving anything, yet does not crash or raise any errors.
In this state I see the following:
A stalled Recv-Q with ESTABLISHED off.
I read around a little bit and it could be related to many things, including TCP keepalive parameters?
Any tips and ideas of how to debug such a state?
Restarting the client solves everything.
| Stalled TCP connection with data in RecvQ and ESTABLISHED off |
With sed:
sudo netstat -4tln | sed '1d;2d;s/[^:]*:\([0-9]\+\).*/\1/' | sort -n |
I want to use the command line to show only the port numbers after the ":".
This is what I'm trying to do
sudo netstat -ant |grep LISTEN|grep :|sort -n|cut -c 45-
It shouldn't list any tcp6 info.
| how to show port numbers that are listening for the incoming connections under TCPv4? |
Credit goes to user OweH_OweH on reddit:
systemd restarts it, because grafana-server.service has Restart=on-failure in it, and by sending the process SIGKILL you trigger that failure state. If you just use the normal kill PROCID_HERE command it will terminate normally and not be restarted.
So I shouldn't have been adding the -9 to my kill command.
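Since systemd manages the service, the cleaner way to free the port is to stop (and optionally disable) the unit rather than killing the process (a sketch):
sudo systemctl stop grafana-server
sudo systemctl disable grafana-server    # keep it from coming back on the next boot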
|
I need port 3000 free for my dev work. But it's being used by grafana-server, which I don't remember ever installing. We do use it within our company, so perhaps at some point it was added to the system somehow.
Anyway, I use this command, sudo netstat -lepunt | grep 3000, to get the process that is on port 3000.
Then I run sudo kill -9 [process number], then I run netstat again and it's there again but with a different process number.
I've also followed this example to force the port number used to a different one but it still doesn't seem to fix my problem.
I've also tried to find every occurrence of grafana on my computer and delete them, and restarted services so the system would know they no longer exist, but they still do. I'm at my wit's end and not sure what to do next. Any ideas? Any more information needed from myself?
| Port 3000 is always being hogged by grafana-server |
It is possible to use the kernel’s iptables and nftables simultaneously, but it requires some attention. The order in which the rules are applied is determined by the hook priority; legacy iptables registers its filter chains at priority 0, so an nft hook can be set to priority -1 if it should apply before iptables, or 1 if it should apply afterwards.
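As a hedged illustration (the table name early and the chain names are made up here), registering base chains just before and just after iptables' filter hook looks like this:
nft add table inet early
nft add chain inet early pre_filter '{ type filter hook input priority -1; policy accept; }'
nft add chain inet early post_filter '{ type filter hook input priority 1; policy accept; }'
Rules in pre_filter are evaluated before the iptables INPUT chain, rules in post_filter after it (and only if iptables accepted the packet).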
Simultaneous NAT requires a kernel >= 4.18.
iptables-nft is designed to facilitate migration to nft. Installing that alongside nft will allow programs expecting the iptables/ip6tables interface to continue working, using nftables in the kernel.
This is the approach used in current containerised environments such as Kubernetes: the containers are supposed to detect which set of tables are used by the host, and use the corresponding iptables interface (feeding the legacy tables or nftables). See Kubernetes issue #71305 for details.
The main pain point comes from combining iptables-nft and iptables-legacy: they use the same priority, so packets go through both chains and end up nowhere.
See When and how to use chain priorities in nftables for details of nftables priorities.
|
The question is pretty much already in the title:
Can nftables and iptables/ip6tables rules be applied at the same time? If so: what's the order of precedence?
The reason I ask is this: plenty of tools - especially from the realm of containerization - still rely on iptables and ip6tables to add rules and make containerized services available or unavailable to other entities on the network. So if I want to express my standard firewall rules with nft this has to work in parallel with iptables/ip6tables.
Or is this catered by using iptables-legacy/ip6tables-legacy with update-alternatives or similar? I.e. all those containerization tools continue to use what they assume is iptables/ip6tables, but in reality it's the compatibility "layer" provided by nftables?
As for the order of precedence I'd appreciate a diagram of sorts, if available to show where rules have which precedence.
| Can nftables and iptables/ip6tables rules be applied at the same time? If so: what's the order of precedence? |
You're probably missing your table or chain.
nft list ruleset
will give you what you are working with. If it prints out nothing, you're missing both.
nft add table ip filter # create table
nft add chain ip filter INPUT { type filter hook input priority 0 \; } # create chain
Then you should be able to add your rule to the chain.
NOTE: If you're logged in with ssh, your connection will be suspended.
|
I am trying to apply below nftables rule which I adopted from this guide:
nft add rule filter INPUT tcp flags != syn counter drop
somehow this is ending up with:
Error: Could not process rule: No such file or directory
Can anyone spot what exactly I might be missing in this rule?
| nftables rule: No such file or directory error |
UPDATE: iptables-nft (rather than iptables-legacy) is using the nftables kernel API and in addition a compatibility layer to reuse xtables kernel modules (those described in iptables-extensions) when there's no native nftables translation available. It should be treated as nftables in most regards, except for this question that it has fixed priorities like the legacy version, so nftables' priorities still matter here.
iptables (legacy) and nftables both rely on the same netfilter infrastructure, and use hooks at various places. It's explained there: Netfilter hooks, or there's this systemtap manpage, which documents a bit of the hook handling:
PRIORITY is an integer priority giving the order in which the probe
point should be triggered relative to any other netfilter hook
functions which trigger on the same packet. Hook functions execute on
each packet in order from smallest priority number to largest priority
number. [...]
or also this blog about netfilter: How to Filter Network Packets using Netfilter–Part 1 Netfilter Hooks (blog disappeared, using a Wayback Machine link instead.)
All this together tell that various modules/functionalities can register at each of the five possible hooks (for the IPv4 case), and in each hook they'll be called by order of the registered priority for this hook.
Those hooks are not only for iptables or nftables. There are various other users, like systemtap above, or even netfilter's own submodules. For example, with IPv4 when using NAT either with iptables or nftables, nf_conntrack_ipv4 will register in 4 hooks at various priorities for a total of 6 times. This module will in turn pull nf_defrag_ipv4 which registers at NF_INET_PRE_ROUTING/NF_IP_PRI_CONNTRACK_DEFRAG and NF_INET_LOCAL_OUT/NF_IP_PRI_CONNTRACK_DEFRAG.
So yes, the priority is relevant only within the same hook. But in this same hook there are several users, and they have already their predefined priority (with often but not always the same value reused across different hooks), so to interact correctly around them, a compatible priority has to be used.
For example, if rules have to be done early on non-defragmented packets, then later (as usual) with defragmented packets, just register two nftables chains in prerouting, one <= -401 (eg -450), the other between -399 and -201 (eg -300). The best iptables could do until recently was -300, ie it couldn't see fragmented packets whenever conntrack, and thus early defragmentation, was in use (since kernel 4.15 with option raw_before_defrag it will register at -450 instead, but can't do both, and iptables-nft doesn't appear to offer such a choice).
So now about the interactions between nftables and iptables: both can be used together, with the exception of NAT in older kernels where they both compete over netfilter's nat resource: only one should register nat, unless using a kernel >= 4.18 as explained in the wiki. The example nftables settings just ship with the same priorities as iptables, with minor differences.
If both iptables and nftables are used together and one should be used before the other because there are interactions and an order of effect is needed, just slightly lower or increase nftables' priority accordingly, since iptables' can't be changed.
For example, in a mostly iptables setting, one can use nftables with a specific match feature not available in iptables to mark a packet, and then handle this mark in iptables, because it has support for a specific target (eg the fancy iptables LED target to blink a led) not available in nftables. Just register a slightly lower priority value for the nftables hook to be sure it's done before. For a usual input filter rule, that would be for example -5 instead of 0. Then again, this value shouldn't be lower than -149 or it will execute before iptables' INPUT mangle chain, which is perhaps not what is intended. That's the only other low value that would matter in the input case. For example there's no NF_IP_PRI_CONNTRACK threshold to consider, because conntrack doesn't register something at this priority in NF_INET_LOCAL_IN, neither does SELinux register something in this hook if something related to it did matter, so -225 has no special meaning here.
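A minimal sketch of that idea (the table name, the port and the LED trigger id are invented for illustration): an nftables chain hooked at input priority -5 sets a mark, and iptables, hooked at priority 0, reacts to it with its LED target.
nft add table ip premark
nft add chain ip premark input '{ type filter hook input priority -5; policy accept; }'
nft add rule ip premark input tcp dport 2222 meta mark set 0x1
iptables -A INPUT -m mark --mark 0x1 -j LED --led-trigger-id ssh
Because -5 is still higher than -149, the mark is set after iptables' INPUT mangle chain but before its INPUT filter chain, as described above.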
|
When configuring a chain in nftables, one has to provide a priority value. Almost all online examples set a priority of 0; sometimes, a value of 100 gets used with certain hooks (output, postrouting).
The nftables wiki has this to say:
The priority can be used to order the chains or to put them before or after some Netfilter internal operations. For example, a chain on the prerouting hook with the priority -300 will be placed before connection tracking operations.
For reference, here's the list of different priorities used in iptables:
NF_IP_PRI_CONNTRACK_DEFRAG (-400): priority of defragmentation
NF_IP_PRI_RAW (-300): traditional priority of the raw table placed before connection tracking operation
NF_IP_PRI_SELINUX_FIRST (-225): SELinux operations
NF_IP_PRI_CONNTRACK (-200): Connection tracking operations
NF_IP_PRI_MANGLE (-150): mangle operation
NF_IP_PRI_NAT_DST (-100): destination NAT
NF_IP_PRI_FILTER (0): filtering operation, the filter table
NF_IP_PRI_SECURITY (50): Place of security table where secmark can be set for example
NF_IP_PRI_NAT_SRC (100): source NAT
NF_IP_PRI_SELINUX_LAST (225): SELinux at packet exit
NF_IP_PRI_CONNTRACK_HELPER (300): connection tracking at exit
This states that the priority controls interaction with internal Netfilter operations, but only mentions the values used by iptables as examples.
In which cases is the priority relevant (i.e. has to be set to a value ≠ 0)? Only for multiple chains with same hook? What about combining nftables and iptables? Which internal Netfilter operations are relevant for determining the correct priority value?
| When and how to use chain priorities in nftables |
The ordering in the example will be undefined, but both chains will be traversed (unless for example the packet gets dropped in the first chain seen).
Netfilter and the Network/Routing stack provide the ordering
Here's the Packet flow in Netfilter and General Networking schematic:
While it was made with iptables in mind, the overall behaviour is the same when applied to nftables, with minor differences: eg there is no separation between mangle and filter, it's all filter in nftables, with the exception of mangle/OUTPUT which should probably be translated into type route hook output; and most of the bridge mingling between ebtables and iptables seen in the lower part (UPDATE: this exists but should be avoided, by using nftables directly in the bridge family, by using kernel >= 5.3 if conntrack features are needed there, and by not using the kernel module br_netfilter at all).
Role of tables
A table in nftables is not equivalent to a table in iptables: it's something less rigid. In nftables, the table is a container to organise chains, set and other kinds of objets, and limit their scope. Contrary to iptables It's perfectly acceptable and sometimes required to mix different chain types (eg: nat, filter, route) in the same table: for example that's the only way they can access a common set since it's scoped to the table and not global (like would be iptables' companion ipset).
Then it's also perfectly acceptable to have multiple tables of the same family including again the same kind of chains, for specific handling or to handle specific traffic: there's no risk of altering rules in an other table when changing the contents of this table (though there's still the risk of having clashing effects as an overall result). It helps managing rules. For example the nftlb load-balancer creates tables (in various families) all named nftlb, intended to be managed only by itself and not clashing with other user-defined tables.
Ordering between hooks and within hooks
In a given family (netdev, bridge, arp, ip, ip6), chains registered to different hooks (ingress, prerouting, input, forward, output, postrouting) are ordered from the hook order provided by Netfilter as seen in the schematic above. Priority's scope is limited to the same hook and doesn't matter here. For example type filter hook prerouting priority 500 still happens before type filter hook forward priority -500 in the case of a forwarded packet.
Where applicable, for each possible hook of a given family, each chain will be competing with other chains registered at the same place. The tables play no role here, except defining the family. As long as the priority is different, within a given hook type, a packet will traverse chains within this hook from the lowest priority to the highest. If exactly the same priority is used for two chains of the same family and hook type, order becomes undefined. When creating chains, will the current kernel version add the chain before or after a chain with the same priority in the corresponding list structure? Will the next kernel version still keep the same behaviour or will some optimization change this order? It's not documented. Both hooks will still be called, but the order they are called in is undefined.
How could this matter? Here's a quote from the man page below, just to clarify that a packet can be accepted (or not) multiple times in the same hook:
accept
Terminate ruleset evaluation and accept the packet. The packet can
still be dropped later by another hook, for instance accept in the
forward hook still allows to drop the packet later in the postrouting
hook, or another forward base chain that has a higher priority number
and is evaluated afterwards in the processing pipeline.For example if one chain accepts a certain packet, and the other chain drops this same packet, the overall result will always be a drop. But one hook might have done additional actions leading to side effects: for example it could have added the packet's source address in a set and the other chain called next have dropped the packet. If the order is reversed and the packet is dropped first, this "side effect" action will not have happened and the set will not have been updated. So one should avoid using the exact same priority in this case. For other cases, mostly when no drop happens, this would not matter. One should avoid using the same priority unless knowing it won't matter.
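Concretely, for the two tables in the question, simply giving the two base chains distinct priorities (20 and 30 here; the exact values are arbitrary) makes the evaluation order deterministic, with t1's chain always traversed before t2's:
table inet t1 {
    chain INPUT {
        type filter hook input priority 20; policy accept;
    }
}
table inet t2 {
    chain INPUT {
        type filter hook input priority 30; policy accept;
    }
}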
Relation to other networking subsystems
Within a hook, all the integer range is available to choose the order, but some specific thresholds do matter.
From nftables' wiki, here are the legacy iptables hook values valid for the ip family, which also include other subsystems:
NF_IP_PRI_CONNTRACK_DEFRAG (-400): priority of defragmentation
NF_IP_PRI_RAW (-300): traditional priority of the raw table placed before connection tracking operation
NF_IP_PRI_SELINUX_FIRST (-225): SELinux operations
NF_IP_PRI_CONNTRACK (-200): Connection tracking operations
NF_IP_PRI_MANGLE (-150): mangle operation
NF_IP_PRI_NAT_DST (-100): destination NAT
NF_IP_PRI_FILTER (0): filtering operation, the filter table
NF_IP_PRI_SECURITY (50): Place of security table where secmark can be set for example
NF_IP_PRI_NAT_SRC (100): source NAT
NF_IP_PRI_SELINUX_LAST (225): SELinux at packet exit
NF_IP_PRI_CONNTRACK_HELPER (300): connection tracking at exitOf those only a few really matter: those not coming from iptables. For example (non-exhaustive) in the ip family:NF_IP_PRI_CONNTRACK_DEFRAG (-400): for a chain to ever see incoming IPv4 fragments, it should register in prerouting at a priority lower than -400. After this only reassembled packets are seen (and rules checking for the presence of fragments never match).
NF_IP_PRI_CONNTRACK (-200): for a chain to act before conntrack it should register in prerouting or in output at a priority lower than -200. For example, register at priority NF_IP_PRI_RAW (-300) (or any other value < -200 but still > -400 if one wants to match the port in all cases) to add a notrack statement to prevent conntrack from creating a connection entry for this packet. So the nftables equivalent of iptables' raw/PREROUTING is just filter prerouting with an adequate priority.
Misc
Some special cases:
the inet family registers within ip and ip6 families' hooks at the same time.
type nat
It behaves differently when a rule matches and executes a NAT-related statement: the packet will not traverse further nat chains in the same hook. type nat registers differently. It has a fixed priority presence among other non-NAT hooks. For example in prerouting or output NAT hooks all happen at fixed priority NF_IP_PRI_NAT_DST = -100 with regard to filter hooks. Multiple NAT hooks still respect the priority order between themselves. Additional details about type nat can be seen in this Q/A: nftables: Are chains of multiple types all evaluated for a given hook?
NAT rules from both iptables-legacy and nftables shouldn't be mixed with a kernel before 4.18. Before 4.18, the first facility among iptables-legacy and nftables to register would handle all of NAT in the hook. |
I am moving from iptables to nftables. I have a basic questions about the packet processing order in nftables.
Since one can create multiple tables of same type, say inet, and also chains can be created inside each table with different or the same priority, what will be the processing order.
For example, if I create following, what will be the order.
table inet t1 {
chain INPUT {
type filter hook input priority 20; policy accept;
...
}
}table inet t2 {
chain INPUT {
type filter hook input priority 20; policy accept;
...
}
}while I understand that chains are hooked to different inputs, I yet to understand the logic behind having different tables.
Apology if is a stupid or basic question
| Packet processing order in nftables |
With a recent enough nftables, you can just write:
meta l4proto {tcp, udp} th dport 53 counter accept comment "accept DNS"
Actually, you can do even better:
set okports {
type inet_proto . inet_service
counter
elements = {
tcp . 22, # SSH
tcp . 53, # DNS (TCP)
udp . 53 # DNS (UDP)
    }
}
And then:
meta l4proto . th dport @okports accept
You can also write domain instead of 53 if you prefer using port/service names (from /etc/services).
|
How can i do this in a single line?
tcp dport 53 counter accept comment "accept DNS"
udp dport 53 counter accept comment "accept DNS" | How to match both UDP and TCP for given ports in one line with nftables |
I received a response from the nftables developers after asking on their mail list. The short answer is that referencing sets in another table is not possible.
However, I was at least able to store my sets in a separate file and bring them in via an @include. This makes my ipsets more manageable instead of having to put them all into a single massive configuration file. The syntax is like so:
# nftables.conf
include "/etc/nftables.country-block"
table inet filter {
set country-block {
type ipv4_addr; flags interval;
elements = $country_block_list
}
}
# nftables.country-block
define country_block_list = {
# comma-separated CIDR blocks here
}
But it's worth noting that as of this writing (2016-12-21), this requires an nft command-line utility built from the latest source code of nftables, as the most current release available at this time (nftables v0.6) will throw an error with the above configuration. nftables has a pretty good wiki outlining how to build and install from source, although I don't expect that will be necessary a few months from now once a new version is released and makes its way out into all the various distros.
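If you later refresh the CIDR list, a quick way to validate and verify the include (using the paths and names from the example above) is to syntax-check the main file without applying it, then list the resulting set:
nft -c -f /etc/nftables.conf
nft list set inet filter country-block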
|
Use case: I have a home router using iptables today. I'm researching converting over to nftables, as it looks to be much more manageable for a lot of rules.
One thing I have setup today under iptables is a 'country-block' ipset which contains country CIDR blocks that covers the majority of random port probe/hack attempts. Unfortunately nftables can't use my existing ipsets directly, but it was fairly straightforward to convert it to an nftables ip set.
Problem: To avoid having one single massive nftables file, I chose to separate my 'country-block' set into a separate file. nftables makes it easy to include other files, so this seems to be well within the intended behavior for nftables. I've defined my country-block as so:
table ip country-block {
set country-block {
type ipv4_addr;
flags interval;
elements = { /* CIDR blocks here */ }
}
}This loads fine. Now I want to use it in my firewall filters. I have a table defined in my main config file 'table inet filter'. Here I want to add the rule:
ip saddr @country-block drop
Following all my google searching for answers, this is the only way I've found for referencing ip sets. Unfortunately, this throws the error:
Error: Could not process rule: Set 'country-block' does not existI tried referencing "country-block@country-block" hoping it might resolve to the country-block namespace I created, but that doesn't work:
Error: syntax error, unexpected drop
ip saddr country-block@country-block drop
^^^^
Does anyone know of a way to reference a set that is in a different table? I'd hate to have to collapse all of my sets into my single 'filter' table and maintain them all in a single file - what an ugly mess that would be.
ps. I tried to tag this 'nftables', but apparently it's a new tag and I don't have the rep required to create a new tag. Can some kind person with the required rep please tag this appropriately?
| nftables ip set multiple tables |
I think the answer is fairly straightforward. First, you have done exactly the right thing...
Firewalld is a pure frontend. It's not an independent firewall by itself. It only operates by taking instructions, then turning them into nftables rules (formerly iptables), and the nftables rules ARE the firewall. So you have a choice between running "firewalld using nftables" and running "nftables only". Nftables in turn works directly as part of the kernel, using a number of modules there, which are partly new, and partly repeat the "netfilter" system of kernel hooks and modules which became part of the kernel around 2000.
It gets quite confusing to run firewalld and nftables (formerly, iptables) in parallel, though I believe some people do so. If you were accustomed to run your own iptables rules anyway, it is the perfect solution to have converted them to nftables rules, and let them be the rules of your firewall. The best thing indeed is to completely disable and preferably mask firewalld - to be slightly pedantic, you can run:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl mask --now firewalld
There is nothing else you need to do. I myself run directly with nftables too. I find that much more transparent than using a front end (there are others than firewalld of course) - it gives you a complete understanding of what you are doing, and you can easily get a complete review of the effect of your rules by running sudo nft list ruleset > /etc/nftables.conf. And the use of separate nft tables in /etc/nftables.d is a nice and easy way of tracking what you have done, and where things are...
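As a side note on rule persistence: on distributions that ship an nftables systemd service (CentOS 8 does, and its unit reads /etc/sysconfig/nftables.conf by default - treat the exact path as something to verify on your system), saving the ruleset there and enabling the service is enough to get the rules back at boot:
sudo nft list ruleset | sudo tee /etc/sysconfig/nftables.conf >/dev/null
sudo systemctl enable --now nftables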
It was subsequently asked in comments by @eriknelson why you would mask a service at all. This is done, afaik, for practical reasons of user experience, and to protect against mistakes and hard-to-find bugs. It is highly undesirable to have more than one firewall system running, as the results for most people would be unpredictable, and it is unlikely that you get clear error messages from any firewall about its interaction with another firewall that is not expected to be there. The kernel however tries to process whatever it is given. If you use either nftables or iptables, you should not use firewalld. Or ufw. Or any other higher level system. And if you mainly use a higher-level firewall like firewalld, you don't like to mess around with the detailed low-level instructions (although occasionally it is done for a particular difficult situation that you find it too difficult to specify in the more high-level firewall).
When you mask a systemd service, you can neither start nor enable it straightaway. If you find / realise that the service is masked, you can unmask it - and then do what you want. This is set up to prevent any such changes made inadvertently or automatically. You can see for yourself if you have masked services on your computer by running sudo systemctl list-unit-files | grep mask
So this situation that you may not necessarily want to remove firewalld completely, but equally don't want to run it perhaps inadvertently, is precisely one of the cases where using sudo systemctl mask xyz.service can come in handy.
I suppose from what you write that you know all this. But I am a bit of an evangelist for nftables, and if others read this answer, they might be helped by these small hints. The documentation of nftables is good, but not excessive.
|
I've been on CentOS 7 for a long time and was used to building my custom iptables configurations on a variety of both personal and business boxes.
I've recently started working with CentOS 8 and learned of the move from iptables to nftables and so I was able to rewrite my rulesets and got everything up and running. The problem was that my custom nft rulesets were not persisting after a reboot, I had to manually systemctl restart nftables to get my rules back into force.
I learned that the culprit was firewalld, which from my understanding (because I never used it in CentOS 7), is a front end management tool for both iptables and nftables... correct? Once I systemctl disable firewalld and tried a reboot, my nftables rulesets were in place as expected. Problem solved.
My question is, what are the repercussions of not using firewalld, nftables is still running and active, so I'm assuming that my actual firewall is still in place, is there any reason why I should leave firewalld running and instead adjust a setting to ensure it's using my nftables ruleset instead. Any clarity on it's use would be greatly appreciated!
| CentOS 8 firewalld + nftables or just nftables |
A variant of this problem was addressed recently in Kubernetes, so it’s worth looking at what was done there. (The variant is whether to use iptables-legacy or iptables-nft and their IPv6 variants to drive the host’s rules.)
The approach taken in Kubernetes is to look at the number of lines output by the respective “save” commands, iptables-legacy-save and iptables-nft-save (and their IPv6 variants). If the former produces ten lines or more of output, or produces more output than the latter, then it’s assumed that iptables-legacy should be used; otherwise, that iptables-nft should be used.
In your case, the decision tree could be as follows:
if iptables isn’t installed, use nft;
if nft isn’t installed, use iptables;
if iptables-save doesn’t produce any rule-defining output, use nft;
if nft list tables and nft list ruleset don’t produce any output, use iptables.
If iptables-save and nft list ... both produce output, and iptables isn’t iptables-nft, I’m not sure an automated process can decide.
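A minimal, non-interactive shell sketch of that decision tree (it treats "no rule-defining output" as no lines beginning with a dash in iptables-save, and leaves the ambiguous case to a human):
#!/bin/sh
if ! command -v iptables >/dev/null 2>&1; then
    echo nft
elif ! command -v nft >/dev/null 2>&1; then
    echo iptables
elif ! iptables-save 2>/dev/null | grep -q '^-'; then
    echo nft
elif [ -z "$(nft list ruleset 2>/dev/null)" ]; then
    echo iptables
else
    echo ambiguous
fi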
|
Given a host that is in an unknown state of configuration, I would like to know if there is an effective way of non-interactively determining if the firewall rule set in place is managed by iptables or nftables.
Sounds pretty simple and I've given this quite a bit of thought, but haven't come back with a meaningful answer to put on a script...
| Check whether iptables or nftables are in use |
There are still some errors lurking in the nftables wiki. The actual syntax is quite logical:
to remove everything:
nft flush ruleset
to empty a table (with ip as family by default if not specified). Eg for mytable:
nft flush table mytable
to delete a table (which also empties it first). Eg for mytable:
nft delete table mytable
to empty a chain (ditto). Eg for mytable mychain:
nft flush chain mytable mychain
to delete a chain (ditto). Eg for mytable mychain:
nft delete chain mytable mychain
to delete a rule (this can still be done only by the handle reference). Eg for tcp dport 5550 accept # handle 18:
nft delete rule mytable mychain handle 18
The thing to remember is what the action is done unto. If you want to do an operation at the chain level, then it's normal there's the chain keyword.
In case of doubt, the nft manpage is usually more accurate, but of course one has to know in advance the information is in the CHAINS section rather than the RULES section:
CHAINS
{add | create} chain [family] table chain [ { type type hook hook [device device] priority priority ; [policy policy ;] } ]
{delete | list | flush} chain [family] table chain
delete chain [family] table handle handle
rename chain [family] table chain newname
[...]
flush Flush all rules of the specified chain. |
I have a number of rules in table mytable chain mychain:
> sudo nft -a list table mytable
table ip mytable { # handle 8
chain mychain { # handle 1
type filter hook input priority filter; policy accept;
tcp dport 5550 accept # handle 18
tcp dport 5551 accept # handle 19
tcp dport 5552 accept # handle 20
tcp dport 5553 accept # handle 21
tcp dport 5554 accept # handle 22
}
}
According to nftables wiki it should be possible to remove all rules from the specified chain.
However the following command returns error:
> sudo nft delete rule mytable mychain
Error: syntax error, unexpected newline, expecting handle
delete rule mytable mychain
^
What is the proper command to remove all rules from mychain without iterating over rule handles?
| nftables remove all rules in chain |
My view is that iptables, ip6tables, ebtables and arptables are a frontend tool-set to Netfilter.
They are a user-space tool-set that formats and compiles the rules to load them into the core Netfilter that runs in the kernel. You can find all the kernel parts of Netfilter in your modules directory (ls /lib/modules/$(uname -r)/kernel/net/netfilter/); they have the form of the nf_*, nft_*, xt_* kernel modules you mentioned.
The problem with these tools is that they operate with rule granularity, so every adjustment of the rules implies downloading ALL the kernel rules, making your modification on a binary blob, then uploading it back to the kernel. This process becomes very CPU-intensive when the rules become too numerous.
nftables is a rewrite of this tool series inside one unique tool (to rule them all ... ahem), which makes it simpler to use and more performant, but it is still a frontend to Netfilter. The main difference is that it has a smooth syntax that is able to address every part of Netfilter, and it has the ability to modify the binary set of rules as a whole, directly inside Netfilter, without having to download and upload them one by one, which represents a big gain in performance.
This also explains why you can use both iptables and nftables to modify the rules, but it is not recommended because you can't see precedence between different rules, or this precedence may not be what you wanted.
Now, depending on distribution policy one can find different set of packages to works with the new Netfilter core kernel set modules.
You mentioned xtables-nft: it is just a shortcut to designate an intermediary set (or package) of userspace tools made of {ip|ip6,eb,arp}tables with the ability to work on the new Netfilter core, the same way the older tools were used, so it helps and eases the migration from the old way to nftables (the new way).
There is also a package named iptables-legacy to keep ip{,6}tables set of traditional tools working the same way with the new Netfilter core without the ability to translate the rules directly to nftables, so Firewall scripting tools like ferm can keep working on new installation of modern kernel.
bridging the two, one can for example iptables-legacy-save |iptables-nft-restore to directly translate an old set of iptables rules to a new nftables ruleset.
xtables-nft is just a shortcut to designate an intermediary set (or package) of userspace tools made of {ip,eb,arp}tables with the ability to work on the new Netfilter core, the same way the olders tools were used to, so it helps and ease the migration from the old way to nftables (the new way)
regards.
|
I have been reading for a while now.
What I understood is:nftables is the modern Linux kernel packet classification framework. nftables is the successor to iptables. It replaces the existing iptables, ip6tables, arptables, and ebtables framework.
x_tables is the name of the kernel module carrying the shared code portion used by iptables, ip6tables, arptables and ebtables thus, Xtables is more or less used to refer to the entire firewall (v4, v6, arp, and eb) architecture. As a system admin, I should not worry about xtables / x_tables (some people use the underscore, so not sure whether xtables is same as x_tables or not) which is actually some code in the kernel.
nftables uses nf_tables, where nf_tables is the name of the kernel module. As a system admin, I should not worry about nf_tables which is actually some code in the kernel.
iptables-nft is something that looks like iptables but acts like nftables. Its whole purpose is to migrate from iptables to nftables.
iptables-nft uses xtables-nft, where xtables-nft is the name of the kernel module. As a system admin, I should not worry about xtables-nft.Please let my know whether the above statements are right or wrong. If wrong then please give me the correct statement.
| What is the relationship or difference among iptables, xtables, iptables-nft, xtables-nft, nf_tables, nftables |
First declare an empty table. If the table already existed, it doesn't throw an error nor alter its content: nothing happens. If it didn't exist, the empty table was just created. Now that it exists in all cases, it can be deleted. All this can be done in the same ruleset.
So, declare the table without chain nor rules, then delete it (the man page tells that flushing it will flush chains and rules, but this will not remove the chains themselves, they'll just be emptied, which will leave old renamed chains or sets in place or clash with them if their properties were changed. The nftables wiki has more informations about delete and flush behaviours.). Now you can really create it, still in the same and unique ruleset file. The same ruleset can now idempotently be loaded multiple times without throwing an error even the first time.
#!/usr/sbin/nft -f
table ip my_table
delete table ip my_table
table ip my_table {
chain output {
type filter hook output priority 0; policy accept;
ip daddr 8.8.8.8 counter
# [...]
}
}
UPDATE: recent enough kernels and nftables also accept the keyword destroy which means "delete if it exists" which could replace the two first commands ([add] + delete) for the same idempotent effect.
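On such a recent stack (roughly nftables >= 1.0.8 with kernel >= 6.3 - check your versions before relying on it), the two preparatory lines collapse into one:
destroy table ip my_table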
You could choose to use an include statement to put all such preparatory lines in a separate file in case there are many and you don't want them to pollute the ruleset.
You can do the same at the chain level, ie without altering other chains in the same table, nor supposing or requiring the table and its chains were here before. Here's an example to have reject_chain activate nftrace, which won't remove other tables nor my_table's chains. The example has no real usefulness, it's just to give an example.
#!/usr/sbin/nft -f
table ip my_table {
chain reject_chain {
}
}
delete chain ip my_table reject_chain
table ip my_table {
chain reject_chain {
nftrace set 1 counter reject
}
}
Compatibility note: kernels < 3.18 would require both flush + delete to work properly, as explained in the wiki. This (and the equivalent for chains) would even work on any kernel version:
table ip my_table
flush table ip my_table
delete table ip my_table |
I have created a net-filter table. I have it in a script. I can not get this script to always load. If I flush/delete the table, then it does not work, if the table does not exist. If I do not flush/delete then it merges the old and new rules.
How do I flush/delete if the table exists?
#!/usr/sbin/nft -f
flush table my_table
table ip my_table {
chain output {
type filter hook output priority 0; policy accept;
ip daddr 8.8.8.8 counter
ip daddr 1.1.1.1 counter
skuid "other" jump restrict_chain
skuid "d" jump d_chain
} chain accept_chain {
nftrace set 1 counter accept
} chain reject_chain {
nftrace set 1 counter reject
} chain restrict_chain {
#type filter priority 0; policy drop;
counter
ip daddr 1.1.1.1 counter
oifname "lo" jump accept_chain
oifname != "lo" jump reject_chain
} chain d_chain {
counter
}
} | nftables: flush/delete when changing or creating new table |
For the question per se, these are the last two questions from the original post:
How can I reliably use nft without iptables rules interference?
Or should I simply use iptables and remove nft?
This is what the nftables wiki says:
What happens when you mix Iptables and Nftables?
How do they interact?
nft       Empty   Accept   Accept        Block         Blank
iptables  Empty   Empty    Block         Accept        Accept
Results   Pass    Pass     Unreachable   Unreachable   Pass
So one should not worry that some traffic will be allowed because it was allowed in one tool, while forbidden in the other.
As for those iptables rules, as I asked, "after a system reboot iptables chains have some rules, which I didn't set (and I have no idea where they come from)", they turned out to come from the libvirtd.service, which I disabled, since I don't need it. But it wouldn't have hurt even if I had not.
|
I'm trying to set up a firewall on my own desktop (currently I'm tinkering with a Fedora 29 virtual machine). I would like to have it on the "deny-everything-by-default" basis. Almost immediately I decided to disable and mask the firewalld.service, since firewalld had no way to drop the outgoing packets, except by using the native iptables syntax. So I decided to resort to nftables, since it's the modern replacement for the former.
The problem is that after a system reboot iptables chains have some rules, which I didn't set (and I have no idea where they come from). On the other hand # nft list ruleset returns nothing. So I assume, that rules from iptables and nft will be enabled simultaneously and when I set up some nft rules, rules from iptables, which can appear from "nowhere", will be able to meddle.
I tried to remove iptables, but dnf refused to do so and warned that systemd depends on it.
So could anyone answer a couple of my questions here, please?Do I understand the concepts here correctly (that iptables rules and chains are separate from nft ones, and that they both are in effect at the same time)?
How can I reliably use nft without iptables rules interference?
Or should I simply use iptables and remove nft? | How to prevent iptables and nftables rules from running simultaneously? |
While the problem might appear simple, it's nothing but. Having Docker around always imposes several challenges to other parts in the system dealing with networking. Once nftables gets more widely adopted and gets directly used by Docker in the future, and especially once Docker stops using br_netfilter, things might become simpler.
If you think it's still worth using nftables along Docker, I present below a method intended to let Docker handle its part and not requiring having to duplicate Docker settings in other firewall rules whenever changes as simple as starting a new container with a new exposed port, are done.
Problems to be solved
iptables is still needed
Currently (2021) Docker still uses iptables and only iptables (It could also use firewalld but only with firewalld with an iptables backend. I'm not considering this case anyway). There's thus currently no way to have a pure nftables system when using Docker. The fact that iptables can be iptables-legacy or iptables-nft doesn't really matter.
Here are a few relevant excerpts from Docker and iptables that are useful for this case:
Docker installs two custom iptables chains named DOCKER-USER and
DOCKER, and it ensures that incoming packets are always checked by
these two chains first.
All of Docker’s iptables rules are added to the DOCKER chain. Do not
manipulate this chain manually. If you need to add rules which load before
Docker’s rules, add them to the DOCKER-USER chain. These rules are
applied before any rules Docker creates automatically.
Nitpicking: actually Docker does -A DOCKER-USER -j RETURN so rules should be added in it before starting docker, or better: inserted, which works in all cases.
Rules added to the FORWARD chain -- either manually, or by another
iptables-based firewall -- are evaluated after these chains.
Docker also sets the policy for the FORWARD chain to DROP. If your
Docker host also acts as a router, this will result in that router not
forwarding any traffic anymore.
Docker enables IP forwarding but firewalls it for other uses than itself by default.
It is possible to set the iptables key to false in the Docker engine’s
configuration file at /etc/docker/daemon.json, but this option is not
appropriate for most users. It is not possible to completely prevent
Docker from creating iptables rules, and creating them after-the-fact
is extremely involved and beyond the scope of these instructions.
Setting iptables to false will more than likely break container
networking for the Docker engine.
Can't avoid having iptables.
br_netfilter
Moreover, Docker also loads the kernel module br_netfilter in order to have this property set:
# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
So bridged frames (with here IPv4 type frames temporarily converted into IPv4 packets) are filtered by iptables and also by nftables (even if this is not clearly documented, nftables just like iptables hooks into Netfilter and Netfilter will call these hooks, be they from iptables or nftables).
This feature is the major point causing problems when interacting with Docker. Without knowledge about it, one would wonder why containers in the same internal bridged LAN can't communicate between themselves anymore, be they handled by Docker or something else (LXC, libvirt/QEMU...) running along Docker.
Here's the Packet flow in Netfilter and General Networking:
A single chain from iptables or nftables in the ip/inet families can thus be traversed in two different ways: from usual routing path (green boxes inside green Network Layer field) but also from bridge path (green boxes inside blue Link Layer field). This documentation also tells:
A bridged packet never enters any network code above layer 1 (Link
Layer). So, a bridged IP packet/frame will never enter the IP code.
So there's a guarantee that a packet won't traverse twice the same chain, which is a relief.
Interactions between iptables and nftables
Since the goal is to use nftables one has to know how to use them together.
Here are Q/A having an answer of mine about this:
Packet processing order in nftables
When and how to use chain priorities in nftables
To summarize:
iptables and nftables can be used together
nftables can have its priority adjusted to have a deterministic order of evaluation between iptables and nftables (for this case: nftables after iptables)
a dropped packet stays definitively dropped whenever/wherever this happens
an accepted packet (which will be by iptables) continues evaluation in the next chain in the same hook (which will be nftables' chain).
packet marks can be used to convey messages between iptables and nftables
Method to solve this in a generic way
Dealing with bridge path
nftables rules in the ip/inet families should avoid doing anything in the bridge path. Without Docker activating br_netfilter this would never even have to be considered. Detecting being in the bridge path from the ip/inet family should be left to iptables to avoid having nftables deal with this and stay generic, with Docker installed or not. It's also easier to do this with iptables than with nftables from the ip/inet family because there's the specific iptables -m physdev --physdev-is-bridged test:
[!] --physdev-is-bridged
Matches if the packet is being bridged and therefore is not being routed. This is only useful in the FORWARD and POSTROUTING chains.
Note that this match depends on and loads br_netfilter if this wasn't done by Docker already: to work around issues caused by br_netfilter, br_netfilter is needed!
Using marks to link iptables and nftables
The idea is to use a mark to have messages from iptables passed to nftables, to differentiate cases:
rules evaluation is happening in the bridge path instead of the routing path
Always accept such case.
packet was ACCEPT-ed by Docker
Further restrictions can be added but mostly accept such case.
packet was ignored by Docker
Use normal nftables rules that don't have to consider the presence of Docker.
packet was DROP-ed for any reason in iptables
That's a moot case, nftables won't see this packet and nothing has to or can be done about it.
iptables
If done before Docker is started, create the filter chain DOCKER-USER:
iptables -N DOCKER-USER
If done after, Docker will have created it.
Add a rule to mark packets before Docker evaluation in the DOCKER chain overriden by the bridge path detection case with a different mark (inserting them here as explained before, but numbering them to preserve natural order, which does matter here):
iptables -I DOCKER-USER 1 -j MARK --set-mark 0xd0cca5e
iptables -I DOCKER-USER 2 -m physdev --physdev-is-bridged -j MARK --set-mark 0x10ca1
0x10ca1 and 0xd0cca5e are arbitrarily chosen values.
Append (before or after Docker was run, effect is the same since Docker always inserts its DOCKER chain before) a final rule that resets the packet's mark only if it was the tentative Docker evaluation mark, and add a final ACCEPT rule to override Docker's default DROP policy set on the FORWARD chain: the idea is to defer further evaluation to nftables for packets unrelated to Docker.
iptables -A FORWARD -m mark --mark 0xd0cca5e -j MARK --set-mark 0
iptables -A FORWARD -j ACCEPT
nftables
Change the inet filter forward priority value to a value slightly greater than NF_IP_PRI_FILTER (0), for example 10, to ensure nftables's forward chain happens after iptables filter/FORWARD in order to respect this chronology. The base chain line in OP's ruleset should be changed from:
chain forward {
type filter hook forward priority 0; policy drop;
to:
chain forward {
type filter hook forward priority 10; policy drop;
The 4 previously described cases can be detected in nftables by checking the mark on the packet. Adding counter expressions to help debug.
mark 0x10ca1: bridge path
Add bridge path pass-through rule:
nft add rule inet filter forward meta mark 0x10ca1 counter accept
mark 0xd0cca5e: Docker case
create a regular/user chain to treat the Docker case and add a rule calling it:
nft add chain inet filter dockercase
nft add rule inet filter forward meta mark 0xd0cca5e counter jump dockercase
add additional restrictions about Docker, but accept by default
For example to restrict incoming packets arriving from the eno2 interface to only be accepted if from a private address within 192.168.0.0/16:
nft add rule inet filter dockercase iif eno2 ip saddr != 192.168.0.0/16 counter drop
nft add rule inet filter dockercase counter accept
no mark: general case not related to Docker
Add anything that would be done without having to consider the presence of Docker, including nothing and having a default drop policy, else probably starting with the usual ct state related,established accept
(no packet: dropped in iptables, non-case)
example above becomes:
...
chain forward {
type filter hook forward priority 10; policy drop;
meta mark 0x10ca1 counter accept
meta mark 0xd0cca5e counter jump dockercase
} chain dockercase {
iif eno2 ip saddr != 192.168.0.0/16 counter drop
counter accept
}
...
Generic handling achieved
The ports 80 and 6200 don't have to appear in the nftables rules anymore. Should a new container that needs to expose new ports be added using Docker commands, nothing at all has to be done in nftables: it's already being taken care of thanks to the marks.
Adding more chains
Still because of br_netfilter's effects, should any other base nftables chain with the property hook forward or hook postrouting contain dropping rules or, more usefully, altering rules (nat...) without using the tricks described in the previous link below figure 7b, then the same kind of arrangement has to be done:
its priority value should be above the iptables' equivalent chain priority
such iptables equivalent chain (except filter/FORWARD where it's already done in DOCKER-USER) should receive:
iptables -t foo -I BAR -m physdev --physdev-is-bridged -j MARK --set-mark 0x10ca1
with foo among raw, mangle, or nat and BAR among PREROUTING or POSTROUTING depending on the case
and the very first rule of the nftables chain should be again:
meta mark 0x10ca1 accept
and if the chain's policy is again drop, it should probably again include a user/regular chain jump from a rule using the 0xd0cca5e mark, as previously done.
If you really want to do something at the bridge level, just use nftables in the bridge family, don't rely on this special case of the ip/inet family called from bridge path because of br_netfilter.
Caveat
Now marks are used to handle this, it becomes more difficult to use marks simultaneously for something else, but not impossible with some care. For example by using bitwise operations and masks with these marks. This is available in iptables and nftables. Even ip rule accepts a mask when using a mark as selector.Important additional required adjustments
Docker adds nat rules to do port forwarding with iptables' DNAT target. In the end all exposed/published ports are routed to the containers instead of being received by the host. That means they will use as seen above iptables's filter/FORWARD chain as well as (with OP's ruleset) nftables inet filter forward chain and won't use INPUT / input.
There are also missing rules preventing correct connectivity for the host.
inet filter input
The input path won't be used at all for Docker's containers, except maybe for the docker-proxy case which usually is for local host's access but that OP already accepts with iif lo accept, so it doesn't have to be further handled in this answer. Anything about Docker shouldn't be present here: references to container's ports 80 and 6200 become useless and should be removed.
Then, unrelated to Docker, the input chain misses a stateful rule. Without it, return traffic from host's output (DNS query replies, ping replies, download for upgrades...) will fail. Use this:
chain input {
type filter hook input priority 0; policy drop;
ct state related,established accept
iif lo accept
iif eno2 icmp type echo-request accept
iif eno2 ip 192.168.0.0/16 tcp dport 22 accept
iif eno2 ip 192.168.0.0/16 tcp dport 443 accept
}
The input path can still require additional rules for Docker itself (rather than its containers): rules might be needed to allow remote access to the Docker API (if security considerations allow it) or various features like VxLAN used by Docker swarm.
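As a hedged illustration only (these are the conventional ports: 2376 for the TLS-protected Docker API, 2377/7946 TCP and 7946/4789 UDP for swarm; only open what you actually use), such exceptions added to the input chain would look like:
iif eno2 ip saddr 192.168.0.0/16 tcp dport 2376 accept
iif eno2 ip saddr 192.168.0.0/16 tcp dport { 2377, 7946 } accept
iif eno2 ip saddr 192.168.0.0/16 udp dport { 7946, 4789 } accept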
inet filter output
Likewise, OP's inet filter output chain's drop policy kills host connectivity (DNS queries, ping requests or downloads can't be initiated etc.). There should either be a policy accept, or the exceptions for needed outgoing traffic from host itself should be added. The chain should include at least something like this:
chain output {
type filter hook output priority 0; policy drop;
ct state related,established accept
oif lo accept
udp dport { 53, 123 } accept
tcp dport { 53, 80, 443 } accept
icmp type echo-request accept
}The containers' packets are not evaluated by these chains, but by the forward chain and won't be restricted.
IPv6
Using the inet family without enabling correctly ICMPv6 prevents any IPv6 connectivity, because IPv6 doesn't rely on (almost never-firewalled) ARP but on ICMPv6 for link local connectivity. Either use the ip family (and use an other name than filter for the table to avoid any clash with iptables-nft) or deal correctly with ICMPv6: accept them all or check which are required in input and in output direction for correct SLAAC (NDP: RS, RA, NS, NA, ...), ping ... handling.
|
I have two docker containers running on my machine where a very restrictive nftables configuration is active. I'd like to keep it that way but whitelist access to the docker containers from outside.
The containers open ports 80 and 6200. The docker service is started with iptables disabled.
Below is the current firewall configuration, including my attempt. icmp, ssh, http and https are already open. For docker, only the http port 80 and the application specific port 6200 are needed. I tried to allow access to docker only from 192.168.0.0/16 to be as restrictive as possible.
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
iif lo accept
iif eno2 icmp type echo-request accept
iif eno2 ip 192.168.0.0/16 tcp dport 22 accept
iif eno2 ip 192.168.0.0/16 tcp dport { http, https, 6200 } accept
} chain forward {
type filter hook forward priority 0; policy drop;
} chain output {
type filter hook output priority 0; policy drop;
}
}I tried adding additional rules for thr docker0 interface, but without any success. I suspect I have to modify chain forward?
| nftables whitelisting docker |
You can use nftrace to trace packet flows. It's very verbose but doesn't go to kernel logs but instead is distributed over multicast netlink socket (ie if nothing listens to them, traces just go to "/dev/null").
If you really want to trace everything, trace from prerouting and output at a low priority. Better use a separate table, because what you are displaying with nft list ip table filter is actually iptables-over-nftables with the compatibility xt match layer API and shouldn't be tampered with (but can safely be used along traces). Also you should know there are probably other tables for iptables, like the nat table.
So, with a ruleset from the file traceall.nft loaded with nft -f traceall.nft:
table ip traceall
delete table ip traceall
table ip traceall {
chain prerouting {
type filter hook prerouting priority -350; policy accept;
meta nftrace set 1
} chain output {
type filter hook output priority -350; policy accept;
meta nftrace set 1
}
}
You can now follow these (very verbose) IPv4 traces with:
nft monitor trace
This would even work the same if doing this inside a container (which is usually not the case for log targets).
You can activate these traces elsewhere, or put conditions before activating them in a rule in a later priority to avoid tracing all hooks/chains. Following this schematic will help understand the order of events: Packet flow in Netfilter and General Networking.
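For example, instead of tracing unconditionally, the rules above could be narrowed to traffic involving the default Docker bridge subnet (172.17.0.0/16 is only the usual default; check docker network inspect bridge for yours):
nft add rule ip traceall prerouting ip saddr 172.17.0.0/16 meta nftrace set 1
nft add rule ip traceall output ip daddr 172.17.0.0/16 meta nftrace set 1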
If choosing to use the equivalent -j TRACE target in iptables, consult also the man for xtables-monitor, because iptables-over-nftables changes its behaviour (compared to iptables-legacy).
While I answered OP's question, here are wild guesses about both issues and log issues:
if Docker itself is running within a container, logs might not be available. They can be made available to the host, and to all containers allowed to query the kernel messages, with sysctl -w net.netfilter.nf_log_all_netns=1, simply because kernel messages don't have namespace instances.
the counter at the log rule in ip filter INPUT is zero, while the counter at the previous rule with a drop statement is not. That means the log rule is made too late: after drop. The log rule (or rather iptables's -j LOG) should be inserted before the final drop statement, not appended after where it will never be reached.
The only INPUT rule about Docker is iifname "docker0" counter packets 0 bytes 0 accept. If the containers are not on the default Docker network, there's no rule allowing them to reach the host.
Try adding a rule for testing this. Be sure the result is inserted before the drop rule. Use iptables, avoid adding a rule with nftables that could be incompatible with iptables-over-nftables:
iptables -I INPUT 8 -i "br-*" -j ACCEPT |
On Debian 10 buster I am having problems with docker containers unable to ping the docker host or even docker bridge interface, but able to reach the internet.
Allowing access as in related questions here, doesn't fix it in my case.
Seems iptables/nftables related, and I can probably figure out what to do, if I could first figure out how to log the errors.
I put in the log rules in both DOCKER-USER and INPUT, with likes of
nft insert rule ip filter DOCKER-USER counter log but they all show 0 packets logged.
/var/log/kern.log doesn't show any firewall related info, and neither does journalctl -k.
How is the new way to view firewall activity with this nftables system?
nft list ip table filtertable ip filter {
chain INPUT {
type filter hook input priority 0; policy drop;
ct state invalid counter packets 80 bytes 3200 drop
iifname "vif*" meta l4proto udp udp dport 68 counter packets 0 bytes 0 drop
ct state related,established counter packets 9479197 bytes 17035404271 accept
iifname "vif*" meta l4proto icmp counter packets 0 bytes 0 accept
iifname "lo" counter packets 9167 bytes 477120 accept
iifname "vif*" counter packets 0 bytes 0 reject with icmp type host-prohibited
counter packets 28575 bytes 1717278 drop
counter packets 0 bytes 0 log
counter packets 0 bytes 0 log
iifname "docker0" counter packets 0 bytes 0 accept
}

chain FORWARD {
type filter hook forward priority 0; policy drop;
counter packets 880249 bytes 851779418 jump DOCKER-ISOLATION-STAGE-1
oifname "br-cc7b89b40bee" ct state related,established counter packets 7586 bytes 14719677 accept
oifname "br-cc7b89b40bee" counter packets 0 bytes 0 jump DOCKER
iifname "br-cc7b89b40bee" oifname != "br-cc7b89b40bee" counter packets 5312 bytes 2458488 accept
iifname "br-cc7b89b40bee" oifname "br-cc7b89b40bee" counter packets 0 bytes 0 accept
oifname "br-d41d1510d330" ct state related,established counter packets 8330 bytes 7303256 accept
oifname "br-d41d1510d330" counter packets 0 bytes 0 jump DOCKER
iifname "br-d41d1510d330" oifname != "br-d41d1510d330" counter packets 7750 bytes 7569465 accept
iifname "br-d41d1510d330" oifname "br-d41d1510d330" counter packets 0 bytes 0 accept
oifname "br-79fccb9a0478" ct state related,established counter packets 11828 bytes 474832 accept
oifname "br-79fccb9a0478" counter packets 11796 bytes 707760 jump DOCKER
iifname "br-79fccb9a0478" oifname != "br-79fccb9a0478" counter packets 7 bytes 526 accept
iifname "br-79fccb9a0478" oifname "br-79fccb9a0478" counter packets 11796 bytes 707760 accept
counter packets 1756295 bytes 1727495359 jump DOCKER-USER
oifname "docker0" ct state related,established counter packets 1010328 bytes 1597833795 accept
oifname "docker0" counter packets 0 bytes 0 jump DOCKER
iifname "docker0" oifname != "docker0" counter packets 284235 bytes 16037499 accept
iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
ct state invalid counter packets 0 bytes 0 drop
ct state related,established counter packets 0 bytes 0 accept
counter packets 0 bytes 0 jump QBS-FORWARD
iifname "vif*" oifname "vif*" counter packets 0 bytes 0 drop
iifname "vif*" counter packets 0 bytes 0 accept
counter packets 0 bytes 0 drop
}

chain OUTPUT {
type filter hook output priority 0; policy accept;
}

chain QBS-FORWARD {
}

chain DOCKER {
}

chain DOCKER-ISOLATION-STAGE-1 {
iifname "br-cc7b89b40bee" oifname != "br-cc7b89b40bee" counter packets 5312 bytes 2458488 jump DOCKER-ISOLATION-STAGE-2
iifname "br-d41d1510d330" oifname != "br-d41d1510d330" counter packets 7750 bytes 7569465 jump DOCKER-ISOLATION-STAGE-2
iifname "br-79fccb9a0478" oifname != "br-79fccb9a0478" counter packets 7 bytes 526 jump DOCKER-ISOLATION-STAGE-2
iifname "docker0" oifname != "docker0" counter packets 590138 bytes 34612496 jump DOCKER-ISOLATION-STAGE-2
counter packets 1808904 bytes 1760729363 return
}

chain DOCKER-ISOLATION-STAGE-2 {
oifname "br-cc7b89b40bee" counter packets 0 bytes 0 drop
oifname "br-d41d1510d330" counter packets 0 bytes 0 drop
oifname "br-79fccb9a0478" counter packets 0 bytes 0 drop
oifname "docker0" counter packets 0 bytes 0 drop
counter packets 644929 bytes 74784737 return
}

chain DOCKER-USER {
counter packets 0 bytes 0 log
iifname "docker0" counter packets 305903 bytes 18574997 accept
counter packets 1450392 bytes 1708920362 return
}
} | How to properly log and view nftables activity? |
As usual nftables is a moving target. This feature appeared in nftables 0.9.4 released on 2020-04-01:

NAT mappings with concatenations. This allows you to specify the address and port to be used in the NAT mangling from maps, eg.
nft add rule ip nat pre dnat ip addr . port to ip saddr map { 1.1.1.1 : 2.2.2.2 . 30 }

You can also use this new feature with named sets:
nft add map ip nat destinations { type ipv4_addr . inet_service : ipv4_addr . inet_service \; }
nft add rule ip nat pre dnat ip addr . port to ip saddr . tcp dport map @destinations

So in OP's case that would be:
nft add rule ip dnatTable dnatChain 'dnat ip addr . port to tcp dport map { 80 : 172.17.0.3 . 8080 }'

Just for information, nftables 1.0.0 released on 2021-08-19 has a simplified syntax:

Simplify syntax for NAT mappings. You can specify an IP range: [...]
Or a specific IP and port.
... dnat to ip saddr map { 10.141.11.4 : 192.168.2.3 . 80 }

So OP's case can then be shortened to (and previous rule will be displayed as):
nft add rule ip dnatTable dnatChain 'dnat to tcp dport map { 80 : 172.17.0.3 . 8080 }'

Using a named map can help to separate rules from data. This would work too instead (using 0.9.4 syntax):
nft add map ip dnatTable portToIpPort '{ type inet_service : ipv4_addr . inet_service ; }'
nft add rule ip dnatTable dnatChain dnat ip addr . port to tcp dport map @portToIpPort

and then at any convenient time:
nft add element ip dnatTable portToIpPort '{ 80 : 172.17.0.3 . 8080 }' |
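For reference, a sketch of the same named-map approach written as a complete file-form ruleset (assuming nftables >= 0.9.4; the table, chain and map names simply reuse OP's example):

table ip dnatTable {
    map portToIpPort {
        type inet_service : ipv4_addr . inet_service
        elements = { 80 : 172.17.0.3 . 8080 }
    }

    chain dnatChain {
        type nat hook prerouting priority dstnat; policy accept;
        dnat ip addr . port to tcp dport map @portToIpPort
    }
}

Loading it with nft -f keeps the rule itself static, while map elements can still be added or removed at runtime.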
Is it possible to have an nftables map which maps port to ipv4_addr:port, where port and ipv4_addr:port have different TCP port numbers? For example, I want to dnat all incoming packets on port 80 to a container running a web server on port 8080 (purely with nftables). This is possible using two maps and two map lookups, as below:
table ip dnatTable {
chain dnatChain {
type nat hook prerouting priority dstnat; policy accept;
dnat to tcp dport map { 80 : 172.17.0.3 }:tcp dport map { 80 : 8080 }
}
}

However, I was wondering if it is possible with only a single map lookup?
Thanks
| nftables map `port` to `ip:port` for DNAT |
The hydra tool connects concurrently multiple times to the SSH server. In OP's case (comment: hydra -l <username> -P </path/to/passwordlist.txt> -I -t 6 ssh://<ip-address>) it will use 6 concurrent threads connecting.
Depending on server settings, one connection could typically try 5 or 6 passwords and take about 10 seconds before being rejected by the SSH server, so I fail to see how a rate of 10 connection attempts per second could be exceeded (but that's the case). It could mean that what triggers is that more than 5 connection attempts are done in less than 1/2s. I wouldn't trust too much the accuracy of 10/s, but it can be assumed it happens here.
Version and syntax issues
The syntax not working with versions 0.8.1 or 0.8.3 is a newer syntax that appeared in this commit:

src: revisit syntax to update sets and maps from packet path
For sets, we allow this:
nft add rule x y ip protocol tcp update @y { ip saddr}
[...]

It was committed after version 0.8.3, so it is available only with nftables >= 0.8.4.
The current wiki revision for Updating sets from the packet path, in the same page, still displays commands with the former syntax:

% nft add rule filter input set add ip saddr @myset
[...]

and results displayed with the newer syntax:

[...]
add @myset { ip saddr }
[...]

Some wiki pages or the latest manpage might not work with older nftables versions.
Anyway, if running with kernel 4.19, nftables >= 0.9.0 should be preferred to get additional features. For example it's available in Debian 10 or in Debian 9 backports.
Blacklisting should be done before stateful accept rules
Once the IP is added to the blacklist, this doesn't prevent established connections from continuing, unhindered and unaccounted, until they're disconnected by the SSH server itself. That's because there's the usual existing short-circuit rule before:

# accept traffic originating from us
ct state established,related accept

This comment is misleading: it doesn't accept traffic originating from us but any traffic already ongoing. This is a short-circuit rule. Its role is to handle stateful connections by parsing all rules only for new connections: any rule after this one applies to new connections. Once connections are accepted, their individual packets stay accepted until end of connection.
For the specific case of blacklist handling, specific blacklist rules or part of them should be placed before this short-circuit rule to be able to take effect immediately. In OP's case that is:

ip saddr @blackhole counter drop

It should be moved before the ct state established,related accept rule.
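For illustration only, a partial sketch (not a drop-in replacement for OP's full chain) of how the top of the input chain would look after this reordering:

chain input {
    type filter hook input priority 0; policy drop;
    ct state invalid drop
    ip saddr @blackhole counter drop
    ct state established,related accept
    # ... remaining rules unchanged ...
}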
Now once the attacker is added to the blacklist, other ongoing connections won't get some remaining free attempts at guessing a password: they'll immediately hang.
If there's a blacklist, consider a whitelist
As a side note, the cheap iif lo accept rule could itself be moved before both, as an optimization and to be whitelisted: otherwise all (even long lived) local established connections will now also be subject to blacklisting in case of abuse (eg: from 127.0.0.1). Consider adding various whitelisting rules before the @blackhole rule.
Optionally warn applications faster
To also prevent ongoing replies from server to reach the blacklisted IP (especially for UDP traffic, not that useful for TCP, including SSH), the equivalent rule using daddr can also be added in the inet filter output chain, with reject to inform faster the local processes trying to emit that they should abort:
ip daddr @blackhole counter reject

Difference between add and update applied on a set
Now with such settings in place, even if ongoing connections are immediately stopped, the attacker is able to keep trying and get a new short window 1mn later, which is not optimal.
The entries must be updated in the input @blackhole ... drop rule. update will refresh the timer if the entry already existed, while add would do nothing. This will keep blocking any further (unsuccessful) attempt to connect to the SSH server until attacker gives up, with zero opened window. (The output rule I added above shouldn't be changed, it's not the attacker's actions):
replace:

ip saddr @blackhole counter drop

with (still keeping older syntax):
ip saddr @blackhole counter set update ip saddr timeout 1m @blackhole dropIt should even be moved before the ct state invalid rule, else if attacker tries invalid packets (eg TCP packet not part of a known connection, like a late RST from an already forgotten connection), the set won't be updated while it could have been.
Limit the maximum number of established connections
Requires kernel >= 4.18 and nftables >= 0.9.0, so can't be done with OP's current configuration.
The attacker might discover it can't connect too many times at once but can still keep adding, without limit, new connections, as long as not connecting too fast.
A limit on concurrent connections (as available with iptables's connlimit) can also be added with an other meter rule:
tcp flags syn tcp dport 22 meter toomanyestablished { ip saddr ct count over 3 } reject with tcp reset

will allow any given IP address to have only 3 established SSH connections.
Or while at it, instead, also trigger the @blackhole set (using newer syntax this time):
tcp flags syn tcp dport 22 meter toomanyestablished { ip saddr ct count over 3 } add @blackhole { ip saddr timeout 1m } drop

This should trigger even before the previous meter rule in OP's case. Use with care to avoid legitimate users being affected (but see openssh's ControlMaster option).
IPv4 and IPv6
As there's no generic IPv4+IPv6 set address type, all rules handling IPv4 (whenever there's the 2-letters word ip) should probably be duplicated into a mirror rule having ip6 in them and working on an IPv6 set.
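For illustration, an assumed sketch (not part of OP's ruleset) of what the IPv6 counterpart could look like — a second set, plus mirrored rules matching on ip6 saddr:

set blackhole6 {
    type ipv6_addr
    flags timeout
    size 65536
}

# IPv6 mirror of the drop rule; the meter/update rules would similarly use ip6 saddr
ip6 saddr @blackhole6 counter drop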
|
I want to create a dynamic blacklist with nftables. Under version 0.8.3 on the embedded device I create a ruleset looks like this with nft list ruleset:
table inet filter {
set blackhole {
type ipv4_addr
size 65536
flags timeout
}

chain input {
type filter hook input priority 0; policy drop;
ct state invalid drop
ct state established,related accept
iif "lo" accept
ip6 nexthdr 58 icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, echo-reply, mld-listener-query, mld-listener-report, mld-listener-done, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, ind-neighbor-solicit, ind-neighbor-advert, mld2-listener-report } accept
ip protocol icmp icmp type { echo-reply, destination-unreachable, echo-request, router-advertisement, router-solicitation, time-exceeded, parameter-problem } accept
ip saddr @blackhole counter packets 0 bytes 0 drop
tcp flags syn tcp dport ssh meter flood { ip saddr timeout 1m limit rate over 10/second burst 5 packets} set add ip saddr timeout 1m @blackhole drop
tcp dport ssh accept
}

chain forward {
type filter hook forward priority 0; policy drop;
}

chain output {
type filter hook output priority 0; policy accept;
}
For me this is only a temporary solution. I want to use the example from the official manpage for dynamic blacklisting. If I use the official example from the manpage my nftables file looks like this:
table inet filter {
set blackhole{
type ipv4_addr
flags timeout
size 65536
}
chain input {
type filter hook input priority 0; policy drop;

# drop invalid connections
ct state invalid drop

# accept traffic originating from us
ct state established,related accept

# accept any localhost traffic
iif lo accept

# accept ICMP
ip6 nexthdr 58 icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, echo-reply, mld-listener-query, mld-listener-report, mld-listener-done, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, ind-neighbor-solicit, ind-neighbor-advert, mld2-listener-report } accept
ip protocol icmp icmp type { destination-unreachable, router-solicitation, router-advertisement, time-exceeded, parameter-problem, echo-request, echo-reply } accept

# accept SSH (port 22)
ip saddr @blackhole counter drop
tcp flags syn tcp dport ssh meter flood { ip saddr timeout 10s limit rate over 10/second} add @blackhole { ip saddr timeout 1m } drop
tcp dport 22 accept
}

chain forward {
type filter hook forward priority 0; policy drop;
}

chain output {
type filter hook output priority 0; policy accept;
}
}

But when I load this nftables file on version 0.8.3 with nft -f myfile I get this error:
Error: syntax error, unexpected add, expecting newline or semicolon
tcp flags syn tcp dport ssh meter flood { ip saddr timeout 10s limit rate over 10/second} add @blackhole { ip saddr timeout 1m } drop

I don't know why this is the case, but according to the wiki it should work from version 0.8.1 and kernel 4.3.
I have version 0.8.3 and kernel 4.19.94.
I have tested under Debian Buster the ruleset from the official manpage with version 0.9.0. The ruleset from the manpage works fine with Debian, but the ip is blocked only once.
With this example I want to create a firewall rule which blocks the IP address on the SSH port if a brute force attack is started against my device. But I want to block the IP for e.g. 5 minutes. After that time it should be possible to connect to the device again from the attacker's IP. If they brute force again, it should block the IP again for 5 minutes, and so on. I want to avoid using additional software on my embedded device like sshguard or fail2ban if it is possible with nftables alone.
I hope anyone can help me. Thanks!
| Create dynamic blacklist with nftables |
You are not translating the port number. When the external connection is to port 1234, this is not a problem. But when it is to 4321, the dnat passes through to port 4321 on the internal server, not port 1234. Try
tcp dport { 1234, 4321 } log prefix "nat-pre " dnat 172.23.32.200:1234;

You do not need to translate the reply packets coming back from your internal server. This is done automagically using the entry in the connection tracking table that is created on the first syn packet.
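One way to watch that entry and the automatic reverse translation in action (assuming the conntrack userspace tool is installed, which is not part of the original setup) is to list the tracking table while a client is connected:

conntrack -L | grep 172.23.32.200

Each entry shows both the original tuple and the reply tuple, which is what lets the kernel un-NAT the return traffic.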
|
I have an OpenWRT gateway (self-built 19.07, kernel 4.14.156) that sits on a public IP address in front of my private network. I am using nftables (not iptables).
I would like to expose a non-standard port on the public address, and forward it to a standard port on a machine behind the gateway. I think this used to be called port forwarding: it would look like your gateway machine was providing, say, http service, but it was really a machine behind the gateway on a private address.
Here is my nftables configuration. For these purposes, my "standard service" is on port 1234, and I want to allow the public to access it at gateway:4321.
#!/usr/sbin/nft -ef
#
# nftables configuration for my gateway
#

flush ruleset

table raw {
chain prerouting {
type filter hook prerouting priority -300;
tcp dport 4321 tcp dport set 1234 log prefix "raw " notrack;
}
}

table ip filter {
chain output {
type filter hook output priority 100; policy accept;
tcp dport { 1234, 4321 } log prefix "output ";
}

chain input {
type filter hook input priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "input " accept;
}

chain forward {
type filter hook forward priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "forward " accept;
}
}

table ip nat {
chain prerouting {
type nat hook prerouting priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "nat-pre " dnat 172.23.32.200;
}

chain postrouting {
type nat hook postrouting priority 100; policy accept;
tcp dport { 1234, 4321 } log prefix "nat-post ";
oifname "eth0" masquerade;
}
}

Using this setup, external machines can access the private machine at gateway:1234. Logging shows nat-pre SYN packet from external to gateway IP, then forward from external to internal IP, then nat-post from external to internal, and existing-connection handling takes care of the rest of the packets.
External machines connecting to gateway:4321 log as raw, where the 4321 gets changed to 1234. Then the SYN packet gets forwarded to the internal server, the reply SYN packet comes back, and ... nothing!
The problem, I think, is that I'm not doing the nftables configuration that would change the internal:1234 back to gateway:4321, which the remote machine is expecting. Even if masquerade changes internal:1234 to gateway:1234, the remote machine is not expecting that, and will probably dump it.
Any ideas for this configuration?
| Port forwarding & NAT with nftables |
You are most certainly running iptables over nftables, as this is the default on Debian buster. To confirm this is the case, check for (nf_tables):
# ip6tables-restore --version
ip6tables-restore v1.8.2 (nf_tables)

Now in the ip6tables manual, there always has been:

-4, --ipv4
This option has no effect in iptables and iptables-restore. If a rule using the -4 option is inserted with (and only with)
ip6tables-restore, it will be silently ignored. Any other uses will
throw an error. This option allows IPv4 and IPv6 rules in a single
rule file for use with both iptables-restore and ip6tables-restore.

The trouble is that you're now running ip6tables-nft-restore rather than ip6tables-legacy-restore.
There is no mention of -4 in differences to legacy iptables, meaning there shouldn't be a difference about it, but here it is. This really looks like a bug: either the new version ip6tables-nft-restore should cope with it, or the documentation should reflect it as an additional difference to be acceptable.
By the way the other way around (-6 with iptables-nft-restore) doesn't look better: it's accepted instead of ignored, leading to -A INPUT -p ipv6-icmp -j ACCEPT in addition to -A INPUT -p icmp -j ACCEPT in IPv4 protocol (this will never happen, except maybe with a custom test, and the IP stack will ignore it anyway).
Possible workarounds:file a bug report, insisting on a regression which would break existing rules and documentation. This would help other people too.
split rules
split your file into two files but apply a different filter to each, something like:
grep -v -- '^ *-4 ' < before > after.v6
grep -v -- '^ *-6 ' < before > after.v4

create a wrapper for ip6tables-restore in /usr/local/sbin/ip6tables-restore doing about the same (and also do the same for iptables-restore), allowing to keep a single rules file (a sketch of such a wrapper is shown after this list)
Give up (for now) iptables over nftables and revert to legacy iptables:
# readlink -f $(which ip6tables-restore)
/usr/sbin/xtables-nft-multi
# update-alternatives --config ip6tables
There are 2 choices for the alternative ip6tables (providing /usr/sbin/ip6tables).

Selection Path Priority Status
------------------------------------------------------------
* 0 /usr/sbin/ip6tables-nft 20 auto mode
1 /usr/sbin/ip6tables-legacy 10 manual mode
2 /usr/sbin/ip6tables-nft 20 manual mode

Press <enter> to keep the current choice[*], or type selection number: 1
update-alternatives: using /usr/sbin/ip6tables-legacy to provide /usr/sbin/ip6tables (ip6tables) in manual mode
# readlink -f $(which ip6tables-restore)
/usr/sbin/xtables-legacy-multi

The link of the related command also changed, fine.
Do the same with iptables.
Current rules are still running over nftables. You can dump them with iptables-nft-save + ip6tables-nft-save and restore them with iptables-save + ip6tables-save. This will result in rules running twice: once with kernel's iptables backend, once with kernel's nftables backend, and NAT might not always work correctly with this on kernel 4.19 (usually the first loaded module wins: here nft_nat). Better reboot, or know how to flush rules and remove relevant (nat) nftables modules.
embrace the new features and use directly nft.
There are commands available to help here (but they have the same problem as above): iptables-translate / ip6tables-translate and iptables-restore-translate / ip6tables-restore-translate, but the result usually needs reworking anyway (especially with fancy matches like u32). Nftables has a family type inet which can actually mix IPv4 and IPv6 rules (might require a newer kernel for this in nat), so it would simplify things. |
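For the wrapper workaround mentioned above, a possible sketch (untested; the paths and the stdin-based invocation are assumptions — adjust for how netfilter-persistent actually calls the tool on your system), saved as /usr/local/sbin/ip6tables-restore and made executable:

#!/bin/sh
# Strip IPv4-only ("-4") rules from stdin, then hand everything else,
# plus any command-line options, to the real ip6tables-restore binary.
grep -v -- '^ *-4 ' | /usr/sbin/ip6tables-restore "$@"

A mirror wrapper for iptables-restore would filter out the '-6' lines instead.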
I have the below iptable rule in /etc/iptables/rule.V6 and /etc/iptables/rule.V4
-4 -A INPUT -p icmp -j ACCEPT
-6 -A INPUT -p ipv6-icmp -j ACCEPTwhen I tried to restart the netfilter-persistent, it internally calls the iptables-restore and ip6tables-restore.
ip6tables-restore failed because it couldn't understand the below rule
-4 -A INPUT -p icmp -j ACCEPTBelow is the error
root@rs-dal:/etc/iptables# ip6tables-restore rules.q
Error occurred at line: 15
Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.Ideally the rule that starts with -4 will be ignored by the ip6tables-restore, but that doesn't seems to be working in Debian Buster.
But, iptables-restore worked fine, it is only the issue with ip6tables-restore.
How to fix this issue?
| ip6table-restore failed in Debian buster/sid |
Here's my nftables cheat sheet:

Load rules: nft -f /etc/sysconfig/nftables.conf (this will append them to the existing ones, so flushing first might be required)
Watch rules: nft list ruleset
Reset rules: nft flush ruleset

Speaking of your request:
nft list ruleset | grep dport
Since tables and chains can be called pretty much anything, it's kinda hard to devise a script which will list only rules for type filter hook input.
|
It is simple to list all open ports and their services with firewall-cmd:

sudo firewall-cmd --list-all

How can I get the list with nftables?
| How to list all ports and service with nftables? |
It's still possible to use nftables in the netdev family (rather than ip family) for this case, since only ingress is needed (nftables still doesn't have egress available). The behaviour of dup and fwd in the ingress hook is exactly the same as tc-mirred's mirror and redirect.
I also addressed a minor detail: rewrite the Ethernet source address to the new Ethernet outgoing interface's MAC address, as would have been done for a truly routed packet, even if it works for you without this. So the interfaces' MAC addresses have to be known beforehand. I put the two required (eth0's and eth1's) in variables/macro definitions, which should be edited with the correct values.
define eth0mac = 02:0a:00:00:00:01
define eth1mac = 02:0b:00:00:00:01

table netdev statelessnat
delete table netdev statelessnat

table netdev statelessnat {
    chain b {
        type filter hook ingress device eth1 priority 0;
        pkttype broadcast ether type ip ip daddr 192.168.1.255 udp dport 21027 jump b-to-a
    }

    chain c {
        type filter hook ingress device eth2 priority 0;
        pkttype broadcast ether type ip ip daddr 192.168.2.255 udp dport 21027 counter jump c-to-b-a
    }

    chain b-to-a {
        ether saddr set $eth0mac ip daddr set 192.168.0.255 fwd to eth0
    }

    chain c-to-b-a {
        ether saddr set $eth1mac ip daddr set 192.168.1.255 dup to eth1 goto b-to-a
    }
}
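To apply it, the snippet is meant to be loaded as one file (the table + delete table prologue makes reloading idempotent), for example — the path here is arbitrary:

nft -f /etc/syncthing-broadcast.nft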
We have a Debian Buster box (nftables 0.9.0, kernel 4.19) attached to four different network segments. Three of these segments are home to devices running Syncthing, which runs its own local discovery via broadcasts to UDP port 21027. The devices thus can't all "see" each other as the broadcasts don't cross segments; the Buster box itself does not participate in the sync cluster.
While we could solve this by running Syncthing's discovery or relay servers on the Buster box, it's been requested that we not use them (reasons around configuration and devices which roam to other sites). Hence, we're looking at a nftables-based solution; my understanding is that this isn't normally done, but to make this work, we have to:

Match incoming packets on UDP 21027
Copy those packets to the other segment interface(s) they need to be seen on
Change the destination IP of the new packet(s) to match the new segment's broadcast address (while preserving the source IP as the discovery protocol can rely on it)
Emit the new broadcasts without them getting duplicated again

Only three of the attached segments participate with devices; all are subnet masked as /24.

Segment A (eth0, 192.168.0.1) should not be forwarded
Segment B (eth1, 192.168.1.1) should be forwarded to segment A only
Segment C (eth2, 192.168.2.1) should be forwarded to both A and B

The closest we have to a working rule for this so far is (other DNAT/MASQ and local filtering rules omitted for brevity):
table ip mangle {
chain repeater {
type filter hook prerouting priority -152; policy accept;
ip protocol tcp return
udp dport != 21027 return
iifname "eth1" ip saddr 192.168.2.0/24 counter ip daddr set 192.168.1.255 return
iifname "eth0" ip saddr 192.168.2.0/24 counter ip daddr set 192.168.0.255 return
iifname "eth0" ip saddr 192.168.1.0/24 counter ip daddr set 192.168.0.255 return
iifname "eth2" ip saddr 192.168.2.0/24 counter dup to 192.168.0.255 device "eth0" nftrace set 1
iifname "eth2" ip saddr 192.168.2.0/24 counter dup to 192.168.1.255 device "eth1" nftrace set 1
iifname "eth1" ip saddr 192.168.1.0/24 counter dup to 192.168.0.255 device "eth0" nftrace set 1
}
}The counters show that the rules are being hit, though without the daddr set rules the broadcast address remains the same as on the originating segment. nft monitor trace shows least some packets are reaching the intended interface with the correct destination IP, but are then landing in the input hook for the box itself and are not seen by other devices on the segment.
Is the outcome we're looking for here achievable in practice, and if so, with which rules?
| nftables: duplicate broadcast packets between segments |
I think what they mean is that the accept will end that specific hook, but another may stop it. For example, looking at this illustration, if the Forward Hook were to accept, but then the Postrouting Hook were to drop, that would satisfy your latter quote because it is "another hook."
(This is speculation because I only have experience in ipchains/iptables but it looks similar enough and IIRC worked in a similar manner.)
|
I have problems understanding verdict statements in nftables. From man nft, or from here, shortly below the heading "Verdict statement", we read the following:

accept and drop are absolute verdicts --- they terminate ruleset evaluation immediately.

but then, in the next sentence (emphasizing mine):

accept: Terminate ruleset evaluation and accept the packet. The packet can still be dropped later by another hook [...]

I can't help but think that's a contradiction in itself. Which one is true? Does accept terminate ruleset evaluation immediately, or does it not? Only one of the statements cited can be true.
I am especially interested in the behavior of accept statements in ingress hooks.
| In nftables, is the verdict statement "accept" final or not? |
Topology
We start by configuring the network interfaces on middleman. I'm assuming that either you're logged in on the system console or you have access via an interface that's not involved in either the inner or outer networks. For the purpose of this answer, we're going to assume that interface middleman-eth0 on middleman is connected to the "inner" network and middleman-eth1 is connected to the "outer" network. This gives us the following network topology:Enable forwarding
We need to ensure that we have enabled packet forwarding on middleman:
sysctl -w net.ipv4.ip_forward=1And we should start with an empty netfilter configuration. Running iptables-save should produce no output.
Interface configuration
For this to work, both middleman-eth0 and middleman-eth1 will have identical network configurations:
ip addr add 192.168.2.1/24 dev middleman-eth0
ip addr add 192.168.2.1/24 dev middleman-eth1If you think that looks weird, you're right! At the moment, the routing table on middleman looks like this:
192.168.2.0/24 dev middleman-eth1 proto kernel scope link src 192.168.2.1
192.168.2.0/24 dev middleman-eth0 proto kernel scope link src 192.168.2.1That's not going to be particularly useful.
VRF configuration
We're going to take advantage of Linux's support for "virtual routing and forwarding" ("VRF"). This allows us to create multiple isolated routing domains on a system, so that traffic coming in on eth0 will see a different routing table than traffic coming in on eth1.
We first create the VRF interfaces:
ip link add vrf-inner type vrf table 100
ip link set vrf-inner up
ip link add vrf-outer type vrf table 200
ip link set vrf-outer up

These commands set up two VRF devices, associating each one with a specific routing table.
Next, we attach each of our physical interfaces to a VRF devices:
ip link set dev middleman-eth0 master vrf-inner
ip link set dev middleman-eth1 master vrf-outer

With these changes, the primary routing table is now empty:
middleman# ip route show
<no output>In table 100 we see the rules associated with middleman-eth0 (the "inner" network):
middleman# ip route show table 100
broadcast 192.168.2.0 dev middleman-eth0 proto kernel scope link src 192.168.2.1
192.168.2.0/24 dev middleman-eth0 proto kernel scope link src 192.168.2.1
local 192.168.2.1 dev middleman-eth0 proto kernel scope host src 192.168.2.1
broadcast 192.168.2.255 dev middleman-eth0 proto kernel scope link src 192.168.2.1

And in table 200 we see the rules for middleman-eth1 (the "outer" network):
middleman# ip route show table 200
broadcast 192.168.2.0 dev middleman-eth1 proto kernel scope link src 192.168.2.1
192.168.2.0/24 dev middleman-eth1 proto kernel scope link src 192.168.2.1
local 192.168.2.1 dev middleman-eth1 proto kernel scope host src 192.168.2.1
broadcast 192.168.2.255 dev middleman-eth1 proto kernel scope link src 192.168.2.1

At this point, we effectively have two disconnected networks. Hosts on the "inner" network can contact 192.168.2.1, and they will be talking to middleman-eth0. Hosts on the "outer" network can also contact 192.168.2.1, but they will be talking to middleman-eth1.
In which the twain do meet
All that we need to do now is set up the mapping so that either side can use addresses from 192.168.3.0/24 to contact nodes on the other side.
First, we need to tell all the nodes that they route to the 192.168.3.0/24 network via middleman; that means on all the nodes, on both the "inner" and "outer" networks, we need:
ip route add 192.168.3.0/24 via 192.168.2.1

On middleman, we need to (a) map addresses in the 192.168.3.0/24 range into the 192.168.2.0/24 range and (b) ensure that when we route the connection we use the correct routing table. To accomplish (a) we can create some NETMAP rules:
iptables -t nat -A PREROUTING -d 192.168.3.0/24 -j NETMAP --to 192.168.2.0/24
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -j NETMAP --to 192.168.3.0/24

To accomplish (b), we'll first mark packets based on their ingress interface:
iptables -t mangle -A PREROUTING -i middleman-eth0 -d 192.168.3.0/24 -j MARK --set-mark 100
iptables -t mangle -A PREROUTING -i middleman-eth1 -d 192.168.3.0/24 -j MARK --set-mark 200

And then use those marks to select a routing table:
ip rule add prio 100 fwmark 100 lookup 200
ip rule add prio 200 fwmark 200 lookup 100

Recall from earlier that table 100 has the rules for the "inner" network and table 200 has the rules for the "outer" network, so these rules say "for packets arriving on interface middleman-eth0, make a routing decision using the routing table associated with middleman-eth1", and vice-versa.
Following the bouncing ball
With all this in place, if node 192.168.2.10 on the "inner" network tries to ping 192.168.3.10:

The packet gets routed to middleman because of the 192.168.3.0/24 via 192.168.2.1 route entry
The packet arrives at middleman-eth0
The packet hits the MANGLE table PREROUTING chain and has the fwmark set to 100
The packet hits the NAT table PREROUTING chain and has the destination mapped to 192.168.2.10
The packet enters the routing subsystem, where it hits the fwmark 100 lookup 200 rule
In routing table 200, it hits the 192.168.2.0/24 dev middleman-eth1, so the kernel will send it out device middleman-eth1
The packet hits the NAT table POSTROUTING chain, where it has its source mapped to 192.168.3.10.
The packet arrives at the "outer" node with address 192.168.2.10.
...take a deep breath...
The outer node sends a reply to 192.168.3.10
The reply arrives at middleman-eth1
The reply hits the MANGLE table PREROUTING chain and has the fwmark set to 200
The reply hits the NAT table PREROUTING chain and has the destination mapped to 192.168.2.10
The reply enters the routing subsystem, where it hits the fwmark 200 lookup 100 rule
In routing table 100, it hits the 192.168.2.0/24 dev middleman-eth0 rule, so the kernel will send it out device middleman-eth0
The reply hits the NAT table POSTROUTING chain, where it has its source mapped to 192.168.3.10
The reply arrives at "inner" node 192.168.2.10, which sees a reply to the request it initially sent out.Validation
If on "inner" node 0 (192.168.2.10) we attempt to ping "outer" node 0 using address 192.168.3.10, running tcpdump -nn -i any icmp on inner node 0 shows:
07:01:58.125370 innernode0-eth0 Out IP 192.168.2.10 > 192.168.3.10: ICMP echo request, id 12999, seq 1, length 64
07:01:58.125533 innernode0-eth0 In IP 192.168.3.10 > 192.168.2.10: ICMP echo reply, id 12999, seq 1, length 64

On middleman we see:
07:01:58.125440 middleman-eth0 In IP 192.168.2.10 > 192.168.3.10: ICMP echo request, id 12999, seq 1, length 64
07:01:58.125459 middleman-eth1 Out IP 192.168.3.10 > 192.168.2.10: ICMP echo request, id 12999, seq 1, length 64
07:01:58.125514 middleman-eth1 In IP 192.168.2.10 > 192.168.3.10: ICMP echo reply, id 12999, seq 1, length 64
07:01:58.125518 middleman-eth0 Out IP 192.168.3.10 > 192.168.2.10: ICMP echo reply, id 12999, seq 1, length 64

And on "outer" node 0 we see:
07:01:58.125489 outernode0-eth0 In IP 192.168.3.10 > 192.168.2.10: ICMP echo request, id 12999, seq 1, length 64
07:01:58.125497 outernode0-eth0 Out IP 192.168.2.10 > 192.168.3.10: ICMP echo reply, id 12999, seq 1, length 64

So I think we have accomplished your goal!

I used mininet to test this configuration; you can find the complete sources for my test environment here. There is a video of this configuration in action here.
Update
As A.B. points out in comments, there is a problem with this configuration! By default, the kernel's connection tracking logic looks only at source/destination address and source/destination port. A connection from innernode0 port 4000 to outernode0 port 80 would appear to be the same connection as one in the opposite direction...that is, assuming that I have a webserver running on port 80 on all the nodes, these two commands:
innernode0# curl --local-port 4000 192.168.3.10

And:
outernode0# curl --local-port 4000 192.168.3.10

Would result in a single connection tracking entry on middleman:
middleman# conntrack -L
tcp 6 118 TIME_WAIT src=192.168.2.10 dst=192.168.3.10 sport=4000 dport=80 src=192.168.2.10 dst=192.168.3.10 sport=80 dport=4000 [ASSURED] mark=0 use=1
iptables -t raw -A PREROUTING -s 192.168.2.0/24 -i middleman-eth0 -j CT --zone-orig 100
iptables -t raw -A PREROUTING -s 192.168.2.0/24 -i middleman-eth1 -j CT --zone-orig 200With these rules in place, we now see two separate connections in the conntrack table:
middleman# conntrack -L
tcp 6 113 TIME_WAIT src=192.168.2.10 dst=192.168.3.10 sport=4000 dport=80 zone-orig=200 src=192.168.2.10 dst=192.168.3.10 sport=80 dport=40568 [ASSURED] mark=0 use=1
tcp 6 112 TIME_WAIT src=192.168.2.10 dst=192.168.3.10 sport=4000 dport=80 zone-orig=100 src=192.168.2.10 dst=192.168.3.10 sport=80 dport=4000 [ASSURED] mark=0 use=1 |
Given the following diagram, is it possible to route traffic through the linux kernel like this? I wish to simulate an exact copy of the devices outside of my "inner" network, with the same IP-ranges whilst enabling the inner and outer devices to communicate with each other without knowing that the other has the same IP as itself.For example: Device X on the "inside" contacts 192.168.3.5, this goes to the middleman-bridge and gets forwarded to device Y on the "outside" with IP 192.168.2.5. The response is then sent back to middleman, and sent to device X with IP 192.168.2.5.
I know that this is possible with network-namespaces and have a working simulation with that. However, I wish to avoid namespaces and instead use something like different routing tables for the different directions of traffic. Is this possible?
If I have understood it correctly, I cannot use NAT because of the duplicated IP-ranges. Is this correct?
| Routing traffic to virtual copy of network without namespaces |
Using nft, to drop all ethernet frames which are received from network interface eth0 and do not have source address of 00:00:5e:00:53:00:
nft add rule filter input iif eth0 ether saddr != 00:00:5e:00:53:00 drop

iptables allows similar filtering with the mac extension:
iptables -A INPUT -i eth0 -m mac ! --mac-source 00:00:5e:00:53:00 -j DROP |
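Note that the nft rule above assumes an ip family table named filter with an input base chain already exists; the "No such file or directory" error shown in the question's edit is typically what you get when it doesn't. A minimal sketch to create them first and then add the rule:

nft add table ip filter
nft add chain ip filter input '{ type filter hook input priority 0; policy accept; }'
nft add rule ip filter input iif eth0 ether saddr != 00:00:5e:00:53:00 drop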
I have two systems that are connected over Ethernet and one system has a WiFi radio which provides internet access.
In this diagram sys#2 has the WiFi radio
sys#1 <---> Ethernet <---> sys#2 <---> wifi <---> internetsys#2 routes packages from Ethernet (interface eth0) out to wifi (interface wlan0). sys#2 connects to an AP, sys#2 does not act as an AP itself. I'm looking for a way to use nftables or iptables to filter and drop packets on sys#2 which do not have a MAC matching sys#1. I'd be more interested in dropping packets arriving on Ethernet. What I'm trying to protect is to prevent someone from plugging into sys#2 and gaining accesses to the WiFi network.
Is this possible with nftables? I know someone could spoof their MAC to get around this but this is just a temporary measure until we can secure the connection with IPSEC or a VPN.
Edit:
Running the suggested nft command results in an error:
# nft add rule filter input iif eth0 ether saddr != A4:A3:A3:00:00:00 drop
<cmdline>:1:1-68: Error: Could not process rule: No such file or directory
# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 metric 1
inet 10.0.1.1 netmask 255.255.255.0 broadcast 10.0.1.255
... | Is it possible to filter/drop packets by MAC using nftables |
There are recent patches allowing to do just that, but they are not yet available in any release.
Here are the relevant patches (but they are part of series, they would probably not be enough te be applied by themselves).
linux kernel:
[v5,2/2] netfilter: nft_meta: support for time matching
libnftnl (userland low level nftables library):
[libnftnl,v2,1/2] expr: meta: Make NFT_META_TIME_{NS,DAY,HOUR} known
nftables (userland command):
[nft,2/4] meta: Introduce new conditions 'time', 'day' and 'hour'

Some usage examples:
time < "2019-06-06 17:00" drop;
time < "2019-06-06 17:20:20" drop;
time < 12341234 drop;
day "Saturday" drop;
day 6 drop;
hour >= 17:00 drop;
hour >= "17:00:01" drop;
hour >= 63000 drop;

[...]
We swap left and right values in a range to properly handle cross-day
hour ranges (e.g. 23:15-03:22).

While the kernel patch was submitted on 2019-08-17, it had to go through nf-next, net-next and was merged for 5.4-rc1
on 2019-09-18. Kernel 5.4 should probably be out in a few weeks.
So according to the example above, while I couldn't test it yet, this should probably be a method to drop incoming connections to the local mail server between 00:00 and 04:00, once running with kernel 5.4, libnftnl 1.1.5? and nftables 0.9.3?:
#!/usr/sbin/nft -f

table inet filter {
chain input {
type filter hook input priority 0; policy accept;
tcp dport 25 hour 00:00-04:00 drop
}
} |
I would like to e.g. block certain traffic between 00:00 and 04:00. Is this possible in nftables?
(Obviously I can just set a cron job that changes the config at these times - but I would like to know if there is a nftables "native" way of achieving this.)
| How do you filter packets differently based on time of day in nftables? |
After some discussions on the #netfilter irc channel, it turns out that things are functioning "as designed". It is not possible for one chain to provide broader access (in the form of accept rules) than that provided by a chain with a reject (or drop) rule.
An accept verdict is only valid in the chain in which it occurs, and it does not prevent packet processing from continuing in higher priority chains.
In other words, if you have a chain like this:
table ip filter {
chain INPUT {
type filter hook input priority 0; policy accept;
reject with icmp type host-prohibited
}
}There is no way to grant access by creating additional chains. The only way to override that reject rule is by adding additional rules in the same chain.
|
I have a Fedora 31 system on which I am using iptables-nft. I need this because there is still a bunch of software that expects the legacy iptables command line tools. This means that my nftables configuration has the corresponding set of tables to match the legacy configuration:
table ip filter
table ip nat
table ip6 filter
table ip mangle
table ip6 nat
table ip6 mangle

I use a containerized VPN service that, prior to nftables, would enable masquerading on my primary ethernet interface by running something like this when the vpn comes up:
iptables -t nat -A POSTROUTING -s 172.16.254.0/24 -o eth0 -j MASQUERADE

Since upgrading to Fedora 31 and iptables-nft, this no longer works. The container (running alpine) does not have the iptables-nft compatibility wrapper, but it does have the nft command itself.
I can't use the nft cli to add rules to the existing tables, because this will break iptables-nft. But I can create new tables. I was hoping I could just apply a configuration like this:
table ip vpn {
chain postrouting {
type nat hook postrouting priority filter; policy accept;
ip saddr 172.16.254.0/24 oifname "eth0" counter masquerade
}

chain forward {
type filter hook forward priority filter; policy accept;
ip saddr 172.16.254.0/24 counter accept
}
}

...but this doesn't appear to have any impact. By setting the chains in this table to priority 0 I was hoping they would match before the legacy nat table, but that doesn't appear to be the case.
Is there a way to make this work?
| Using custom tables in nft |
I was able to achieve this with the following nftables ruleset (I had to build nft from source as v0.5 which ships with Ubuntu 16.04 doesn't support packet field mangling) :
table ip mytable {
chain prerouting {
type filter hook prerouting priority -300; policy accept;
iifname "eno2.11" ip saddr 192.168.0.222 ip saddr set 192.168.101.222
iifname "eno2.12" ip saddr 192.168.0.222 ip saddr set 192.168.102.222
iifname "eno2.13" ip saddr 192.168.0.222 ip saddr set 192.168.103.222
}

chain output {
type filter hook output priority -300; policy accept;
ip daddr 192.168.101.222 ip daddr set 192.168.0.222
ip daddr 192.168.102.222 ip daddr set 192.168.0.222
ip daddr 192.168.103.222 ip daddr set 192.168.0.222
}
}and the following entries in /etc/network/interfaces:
auto eno2 # For switch management interface
iface eno2 inet static
address 192.168.2.2/24

auto eno2.11
iface eno2.11 inet static
address 192.168.101.1
netmask 255.255.255.0

auto eno2.12
iface eno2.12 inet static
address 192.168.102.1
netmask 255.255.255.0

auto eno2.13
iface eno2.13 inet static
address 192.168.103.1
netmask 255.255.255.0

This doesn't "unmangle" the source IP of outgoing packets, i.e. the gadgets still see requests from the server as coming from 192.168.101.1, 192.168.102.1 etc rather than 192.168.0.1 - in my application this doesn't matter but it could probably be addressed with additional rules in the output chain.
|
I have a physical network with a Linux server (Ubuntu 16.04, kernel 4.13) and several gadgets on it. Each gadget has the same unchangeable static IP, e.g. 192.168.0.222/24. I would like to communicate with all these gadgets via an arbitrary IP protocol (e.g. ICMP ping or a custom UDP protocol)
Fortunately I have a managed network switch connecting the server and the gadgets. I've configured the switch to have a trunk port for the server and access ports for each gadget, each on a different VLAN (VIDs 11, 12, etc).
I have added 8021q to /etc/modules and set up VLAN entries in /etc/network/interfaces:
auto eno2 # For switch management interface
iface eno2 inet static
address 192.168.2.2/24

auto eno2.11 # Gadget 1 (only)
iface eno2 inet static
address 192.168.0.1/24

#auto eno2.12 # Gadget 2 - disabled
#iface eno2 inet static
# address 192.168.0.1/24

With the entries as shown above, I can communicate with gadget 1 (e.g. ping 192.168.0.222) and don't see any traffic from gadget 2.
But I'd like to be able to communicate with all gadgets at the same time, and be able to distinguish one from the other. They don't need to talk to each other. I was thinking for each gadget I could create a unique host IP and subnet, e.g.
Host IP & subnet    "Fake" gadget IP    Actual gadget IP    VLAN Interface
192.168.101.1/24    192.168.101.222     192.168.0.222       eno2.11
192.168.102.1/24    192.168.102.222     192.168.0.222       eno2.12
This seems like a somewhat unusual variant on NAT. Note the traffic with the "fake" IPs doesn't need to (and shouldn't) leave the server - we're not forwarding to something else on the network.Is this a reasonable approach to the problem?
How do I set up /etc/network/interfaces and iptables or nftables to achieve this? | nftables / iptables rules to rewrite source IP by interface |
In case someone else stumbles upon the same issue, my main problem was that I was using rules in the incorrect order.
I was adding a drop rule before the accept rule, and this seems to work the other way around.
This is a sample rule for dropping all IP addresses except 2:
ip saddr 1.1.1.1 tcp dport 6379 accept
ip saddr 2.2.2.2 tcp dport 6379 accept
tcp dport 6379 dropComplete rules file:
#!/usr/sbin/nft -f

flush ruleset

table inet filter {
chain input {
type filter hook input priority 0;
# allow connection to redis from
ip saddr 1.1.1.1 tcp dport 6379 accept
ip saddr 2.2.2.2 tcp dport 6379 accept
tcp dport 6379 drop
}
chain forward {
type filter hook forward priority 0;
}
chain output {
type filter hook output priority 0;
}
} |
I am configuring a REDIS server and I want to allow connections only from a set of specific IP addresses.
This is a Debian 10 server, and the recommended framework to use is nft, which I haven't used in the past.
The default ruleset is this:
#!/usr/sbin/nft -f

flush ruleset

table inet filter {
chain input {
type filter hook input priority 0;
}
chain forward {
type filter hook forward priority 0;
}
chain output {
type filter hook output priority 0;
}
}

What rule do I need to add in that file to allow incoming connections to redis from IP 1.1.1.1 and 2.2.2.2, dropping everything else?
REDIS is using port 6379.
| nftables allow redis only from specific IP addresses |
Your current setup doesn't work simply because forwarding is disabled, despite:

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

because of the type filter hook forward chain set to policy drop:

chain forward {
type filter hook forward priority 0; policy drop;
}

If you want to restrict forwarding, in case some system wrongly sets the RPi4 as its gateway, rather than just forwarding everything (by changing the above policy from drop to accept or by completely removing the forward chain), you can choose to forward only packets that underwent a dnat translation:
nft add rule inet filter forward ct status dnat accept

This suffices for all packets of the flow, as this information is stored in the unique conntrack lookup entry that was created (and is used for both directions and every packet).
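If the ruleset is maintained as a file (as in the question), the same fix is one added line in the existing forward chain, for example:

chain forward {
    type filter hook forward priority 0; policy drop;
    ct status dnat accept
}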
|
I am trying to set up a port forwarding proxy using a Raspberry Pi 4 with NFTables. I want to duplicate the simple port forwarding capabilities of a cheap home nat router. This a component of a larger remote admin application I am working on.
I can get it to redirect ports on the host itself using redirect. But I cannot get it to forward anything beyond the host.
I have routing enabled. But I would also like it to work from within the lan. I don't think this is a factor.
Looking at journalctl, it appears my rule is getting triggered. But the browser never brings up the page.

port 80 is redirecting to a web app running locally on 8088 and this works
port 81 is supposed to forward to the admin screen on a printer
port 82 is trying to forward to an external web site

$ curl -i http://192.168.10.32:81
^C (no response)
$Log and config are below.
Update: I failed to mention that the device was initially also running WireGuard. To simplify, I have disabled WireGuard and relisted the config and logs. So it's a pretty vanilla config now.
# nft list ruleset
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
ct state established,related accept
ct state invalid drop
iifname "lo" accept
ip protocol icmp accept
tcp dport { ssh, 22222 } ct state new log prefix "[nftables] New SSH Accepted: " accept
tcp dport { http, https, 81, 82, omniorb } accept
pkttype { host, broadcast, multicast } drop
log prefix "[nftables] Input Denied: " flags all counter packets 0 bytes 0 drop
}

chain forward {
type filter hook forward priority 0; policy drop;
}

chain output {
type filter hook output priority 0; policy accept;
}
}
table ip nat {
chain postrouting {
type nat hook postrouting priority 100; policy accept;
masquerade
}

chain prerouting {
type nat hook prerouting priority -100; policy accept;
tcp dport http log prefix "redirect to 8088 " redirect to :omniorb
tcp dport 81 log prefix "pre redirect to printer " level debug dnat to 192.168.10.10:http
tcp dport 82 log prefix "redirect to web " dnat to 104.21.192.38:http
}
}

redirect 80 to 8088 works
forward to printer and web do not work

Apr 17 13:59:48 douglas kernel: redirect to 8088 IN=eth0 OUT= MAC=dc:a6:32:ab:9c:76:f4:6d:04:63:aa:7d:08:00 SRC=192.168.10.20 DST=192.168.10.32 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=20702 DF PROTO=TCP SPT=44984 DPT=80 WINDOW=64240 RES=0x00 SYN URGP=0
Apr 17 14:00:50 douglas kernel: pre redirect to printer IN=eth0 OUT= MAC=dc:a6:32:ab:9c:76:f4:6d:04:63:aa:7d:08:00 SRC=192.168.10.20 DST=192.168.10.32 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=2569 DF PROTO=TCP SPT=34024 DPT=81 WINDOW=64240 RES=0x00 SYN URGP=0
Apr 17 14:00:51 douglas kernel: pre redirect to printer IN=eth0 OUT= MAC=dc:a6:32:ab:9c:76:f4:6d:04:63:aa:7d:08:00 SRC=192.168.10.20 DST=192.168.10.32 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=2570 DF PROTO=TCP SPT=34024 DPT=81 WINDOW=64240 RES=0x00 SYN URGP=0
Apr 17 14:00:53 douglas kernel: pre redirect to printer IN=eth0 OUT= MAC=dc:a6:32:ab:9c:76:f4:6d:04:63:aa:7d:08:00 SRC=192.168.10.20 DST=192.168.10.32 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=2571 DF PROTO=TCP SPT=34024 DPT=81 WINDOW=64240 RES=0x00 SYN URGP=0
Apr 17 14:00:59 douglas kernel: redirect to web IN=eth0 OUT= MAC=dc:a6:32:ab:9c:76:f4:6d:04:63:aa:7d:08:00 SRC=192.168.10.20 DST=192.168.10.32 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=36328 DF PROTO=TCP SPT=44326 DPT=82 WINDOW=64240 RES=0x00 SYN URGP=0
Apr 17 14:01:00 douglas kernel: redirect to web IN=eth0 OUT= MAC=dc:a6:32:ab:9c:76:f4:6d:04:63:aa:7d:08:00 SRC=192.168.10.20 DST=192.168.10.32 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=36329 DF PROTO=TCP SPT=44326 DPT=82 WINDOW=64240 RES=0x00 SYN URGP=0
Apr 17 14:01:02 douglas kernel: redirect to web IN=eth0 OUT= MAC=dc:a6:32:ab:9c:76:f4:6d:04:63:aa:7d:08:00 SRC=192.168.10.20 DST=192.168.10.32 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=36330 DF PROTO=TCP SPT=44326 DPT=82 WINDOW=64240 RES=0x00 SYN URGP=0
Apr 17 14:01:06 douglas kernel: redirect to web IN=eth0 OUT= MAC=dc:a6:32:ab:9c:76:f4:6d:04:63:aa:7d:08:00 SRC=192.168.10.20 DST=192.168.10.32 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=36331 DF PROTO=TCP SPT=44326 DPT=82 WINDOW=64240 RES=0x00 SYN URGP=0

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether dc:a6:32:ab:9c:76 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.32/24 brd 192.168.10.255 scope global dynamic noprefixroute eth0
valid_lft 603659sec preferred_lft 528059sec
inet6 fe80::2cd9:f195:bfe6:38e8/64 scope link
valid_lft forever preferred_lft forever
3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether dc:a6:32:ab:9c:77 brd ff:ff:ff:ff:ff:ff

# ip route
default via 192.168.10.1 dev eth0 proto dhcp src 192.168.10.32 metric 202
192.168.10.0/24 dev eth0 proto dhcp scope link src 192.168.10.32 metric 202

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1 | Port forwarding not working with "policy drop" |
Remarks and adjustments (probably caused by obfuscation):

I'll assume the WireGuard peer uses 10.8.0.105 instead of 10.8.0.107, to match the nftables ruleset.
233.252.0.0 would cause problems in a simulation (especially on peer) because it's a multicast address. I'll use 192.0.2.233 below (with no network relation to 192.0.2.2).

This question is about a problem with forwarding done with a dnat rule, rather than just local traffic. There's also a hidden problem with the WireGuard tunnel envelope, also fixed at the end.
The behavior of the routing stack with the ping test (including the actual dnat that happened in prerouting and the un-masquerade that will happen for the reply) can be summarized with these two commands that query the kernel about what route will be used:
# ip route get from 192.0.2.2 iif ens19 to 10.8.0.105
10.8.0.105 from 192.0.2.2 dev wg0
cache iif ens19
# ip route get from 10.8.0.105 iif wg0 to 192.0.2.2
192.0.2.2 from 10.8.0.105 via 203.0.113.1 dev ens18
cache iif wg0

Here one can see the reply uses the wrong interface. The address will be rewritten by the conntrack entry's content: still 198.51.100.105 even if it doesn't appear above.
This one is caused by a missing rule: anything that comes (back) from wg0 should use the table subnets. Fixed with:
ip rule add iif wg0 lookup subnetsThis also fixes the case with rp_filter=1 where the first route test above would just fail with RTNETLINK answers: Invalid cross-device link, even if normally one should add the wg0 route in this table too.
giving now:
# ip route get from 10.8.0.105 iif wg0 to 192.0.2.2
192.0.2.2 from 10.8.0.105 via 198.51.100.1 dev ens19 table subnets
cache iif wg0

The ping test will now work correctly.

There's an additional, somewhat hidden WireGuard envelope routing problem too.
The combination of:

not having enabled Strict Reverse Path Forwarding (RFC 3704)
having the peer contact the server first (see the additional issue at the end)
having (at least) the kernel implementation figure out it should reply with the same source it was initially contacted to

allows WireGuard to somewhat work, so a ping 10.8.0.1 from peer gets a reply and allows any following WireGuard traffic to continue using the same envelope addresses.
When not stating a source address for a local (non-routed) flow, the routing stack has to figure out which one should be used for the given route. This is especially important for UDP where a socket is often kept unbound (ie: having source 0.0.0.0 aka INADDR_ANY). This is not an issue for a TCP server, as the duplicated established socket created after accept(2) is not bound to 0.0.0.0 anymore but to the correct address: it will then present this address to the routing stack. Here, WireGuard uses UDP with INADDR_ANY. In particular it doesn't bind to 198.51.100.3. That means it presents as source 0.0.0.0 and leaves to the kernel's routing stack the resolution of the outgoing source IP address.
If server's WireGuard had been initiating the very first packet (rather than peer doing it), it would have used 203.0.113.134 instead of 198.51.100.3: the routing stack has no specific ip rule for 0.0.0.0: the ip rule 32765: from 198.51.100.0/24 lookup subnets doesn't match and no special policy routing is applied. In the end, the UDP packet leaves as 203.0.113.134 using ens18.
It appears the kernel implementation at least then continues to use the same address it was queried on. That's not to be relied upon, multi-homing with UDP services requires special support (eg: using IP_PKTINFO) from applications because of this.
Sought outcome for WireGuard:
# ip route get from 198.51.100.105 to 192.0.2.233
192.0.2.233 from 198.51.100.105 via 198.51.100.1 dev ens19 table subnets uid 0
cache Actual outcome at least if it's the first to initiate traffic:
# ip route get from 0.0.0.0 to 192.0.2.233
192.0.2.233 via 203.0.113.1 dev ens18 src 203.0.113.134 uid 0
cache To really fix the WireGuard tunnel multi-homed routing itself, one can use a per-L4-protocol routing rule:
ip rule add iif lo ipproto udp sport 51820 lookup subnets
(iif lo is a special syntax to mean locally initiated (non-forwarded) traffic, it's not really about the lo interface).
Giving:
# ip route get from 0.0.0.0 ipproto udp sport 51820 to 192.0.2.233
192.0.2.233 via 198.51.100.1 dev ens19 table subnets src 198.51.100.3 uid 0
cache Despite presenting INADDR_ANY as source, having the UDP source port 51820 now selects the subnets routing table.
|
What I am doing is quite simple, I use port forwarding and enable masquerading on the POSTROUTING chain:
table inet nat {
chain prerouting {
type nat hook prerouting priority -100; policy accept;
ip daddr 198.51.100.105 counter dnat to 10.8.0.105 comment "host.example.com"
}
chain postrouting {
type nat hook postrouting priority 100; policy accept;
counter masquerade
}
}I have following interfaces on my machine:
1: lo
2: ens18
3: ens19
4: wg0The routes are:
default via 203.0.113.1 dev ens18 onlink
10.8.0.0/24 dev wg0 proto kernel scope link src 10.8.0.1
198.51.100.0/24 dev ens19 proto kernel scope link src 198.51.100.3
203.0.113.0/24 dev ens18 proto kernel scope link src 203.0.113.134And the rules are:
0: from all lookup local
32764: from all to 198.51.100.0/24 lookup subnets
32765: from 198.51.100.0/24 lookup subnets
32766: from all lookup main
32767: from all lookup default
The output of ip route get 198.51.100.105 is:
local 198.51.100.105 dev lo table local src 198.51.100.3 uid 0
cache <local>The output of ip route show table subnets:
default via 198.51.100.1 dev ens19
198.51.100.0/24 dev ens19 scope link src 198.51.100.3
When I now ping the public address 198.51.100.105 from an external VPS or my DSL at home, I can verify that not only the ICMP packet is received, but also replied:
ICMP Request
root@debian:~# tcpdump -i ens19 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on ens19, link-type EN10MB (Ethernet), snapshot length 262144 bytes
21:35:51.686208 IP 192.0.2.2 > 198.51.100.105: ICMP echo request, id 14855, seq 1, length 64
ICMP Reply
root@debian:~# tcpdump -i ens18 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on ens18, link-type EN10MB (Ethernet), snapshot length 262144 bytes
21:35:38.797502 IP 198.51.100.105 > 192.0.2.2: ICMP echo reply, id 5107, seq 1, length 64
But as you can see, the reply leaves through the wrong interface. It should be ens19, but it uses ens18 and I don't get why.
Why are the route/rules not honored? The subnet 198.51.100.0/24 doesn't have anything to do with ens18, it is even in a different VLAN.
Isn't it possible with nftables to force a different outbound interface for masqueraded packets?
Thanks a lot for any help.
EDIT: The output of ip -br link; ip -4 -br addr; ip -4 route; ip rule; ip rule show table subnets is:
ip -br link; ip -4 -br addr; ip -4 route; ip rule; ip rule show table subnets
lo UNKNOWN 00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
ens18 UP a6:ea:02:1c:XX:XX <BROADCAST,MULTICAST,UP,LOWER_UP>
ens19 UP ac:71:fe:18:XX:XX <BROADCAST,MULTICAST,UP,LOWER_UP>
wg0 UNKNOWN <POINTOPOINT,NOARP,UP,LOWER_UP>
lo UNKNOWN 127.0.0.1/8
ens18 UP 203.0.113.134/24
ens19 UP 198.51.100.3/24 198.51.100.100/24 198.51.100.101/24 198.51.100.102/24 198.51.100.103/24 198.51.100.104/24 198.51.100.105/24
wg0 UNKNOWN 10.8.0.1/24
default via 203.0.113.1 dev ens18 onlink
10.8.0.0/24 dev wg0 proto kernel scope link src 10.8.0.1
198.51.100.0/24 dev ens19 proto kernel scope link src 198.51.100.3
203.0.113.0/24 dev ens18 proto kernel scope link src 203.0.113.134
0: from all lookup local
32764: from all to 198.51.100.0/24 lookup subnets
32765: from 198.51.100.0/24 lookup subnets
32766: from all lookup main
32767: from all lookup default
32764: from all to 198.51.100.0/24 lookup subnets
32765: from 198.51.100.0/24 lookup subnets
WireGuard
Output of command wg:
interface: wg0
public key: XrSd2TftIpiL3zhXXX=
private key: (hidden)
listening port: 51820
peer: gZ89rFX6DvBtdeuYXXX=
endpoint: 233.252.0.0:39126
allowed ips: 10.8.0.0/24
latest handshake: 20 seconds ago
transfer: 3.42 MiB received, 4.08 MiB sent
Output of command systemctl status wg-quick@wg0:
● wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
Loaded: loaded (/lib/systemd/system/wg-quick@.service; enabled; preset: enabled)
Active: active (exited) since Sat 2023-08-05 23:42:48 CEST; 14h ago
Docs: man:wg-quick(8)
man:wg(8)
https://www.wireguard.com/
https://www.wireguard.com/quickstart/
https://git.zx2c4.com/wireguard-tools/about/src/man/wg-quick.8
https://git.zx2c4.com/wireguard-tools/about/src/man/wg.8
Process: 690 ExecStart=/usr/bin/wg-quick up wg0 (code=exited, status=0/SUCCESS)
Main PID: 690 (code=exited, status=0/SUCCESS)
CPU: 27ms
Aug 05 23:42:48 debian systemd[1]: Starting wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0...
Aug 05 23:42:48 debian wg-quick[690]: [#] ip link add wg0 type wireguard
Aug 05 23:42:48 debian wg-quick[690]: [#] wg setconf wg0 /dev/fd/63
Aug 05 23:42:48 debian wg-quick[690]: [#] ip -4 address add 10.8.0.1/24 dev wg0
Aug 05 23:42:48 debian wg-quick[690]: [#] ip link set mtu 1420 up dev wg0
Aug 05 23:42:48 debian systemd[1]: Finished wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0.
Server: /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.1/24
SaveConfig = true
ListenPort = 51820
PrivateKey = 8BspU4XXX=
[Peer]
PublicKey = gZ89rFX6DvBtdeuYXXX=
AllowedIPs = 10.8.0.0/24
Endpoint = 233.252.0.0:58642
Peer: /etc/wireguard/wg0.conf
[Interface]
# Client Private Key
PrivateKey = iO00+qQDXXX=
Address = 10.8.0.107/24
[Peer]
# Server Public Key
PublicKey = XrSd2TftIpiL3zhXXX=
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
Endpoint = 198.51.100.3:51820 | nftables, masquerade: packets go through wrong outbounding interface |
From nftables wiki:
Since June 2018, the old xtables/setsockopt tools are considered legacy. However, there is support to use the iptables/ip6tables/arptables/ebtables old syntax with the nf_tables kernel backend. This is described with further details in the Legacy xtables tools wiki page.
You can use iptables-nft to achieve your goal like below:
iptables-nft -I INPUT 1 -i eth0 -p tcp -s 192.168.178.20 --dport 8201 -j REJECT
The rule takes effect immediately, and iptables-nft-save will show it in iptables-save format. You can confirm that this rule exists in nftables by:
nft list ruleset
You should see something like this:
nft list ruleset
table ip filter {
chain INPUT {
type filter hook input priority filter; policy accept;
iifname "eth0" ip saddr 192.168.178.20 tcp dport 8201 counter packets 0 bytes 0 reject
}
}
In addition you can directly translate your rules using the below syntax:
iptables-translate -I INPUT 1 -i eth0 -p tcp -s 192.168.178.20 --dport 8201 -j REJECT
which will give you the following output:
nft insert rule ip filter INPUT iifname "eth0" ip saddr 192.168.178.20 tcp dport 8201 counter reject
Also, you can save all of your iptables rules with iptables-save > save.txt and then use iptables-restore-translate -f save.txt to get the translated rules.
Take a look at my own question a few months back for further explanation.
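If you prefer to manage the rule natively instead of going through the compatibility layer, the translated rule can be kept in a small ruleset file. A sketch only (the table/chain names mirror the translation shown above; merge it into whatever ruleset you already load):
#!/usr/sbin/nft -f
table ip filter {
    chain INPUT {
        type filter hook input priority filter; policy accept;
        iifname "eth0" ip saddr 192.168.178.20 tcp dport 8201 counter reject
    }
}
Load it with nft -f <file>, or place it where your distribution's nftables service picks it up.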
|
I have this rule for iptables
iptables -I INPUT 1 -i eth0 -p tcp -s 192.168.178.20 --dport 8201 -j REJECT
I was looking for tutorials on how to translate rules but couldn't find any.
How do I create that rule with nftables? Or is the syntax the same?
| Translate firewall rule from iptables to nftables |
Actually nftables's egress hook was added in kernel 5.16, and improved support (fwd) in 5.17.
There were several attempts earlier, and one of them was NACK-ed at the same time it was initially committed, making it appear in Kernel Newbies for version 5.7, and apparently even nftables' wiki has it wrong by linking to Kernelnewbies for Linux 5.7 instead of Linux 5.16.
Here is a relevant mailing list link from March 2020 (around kernel 5.7):
Subject: Re: [PATCH 00/29] Netfilter updates for net-next
From: David Miller <davem () davemloft ! net>
From: Alexei Starovoitov
Date: Tue, 17 Mar 2020 20:55:46 -1000
On Tue, Mar 17, 2020 at 2:42 PM Pablo Neira Ayuso wrote:
Add new egress hook, from Lukas Wunner.
NACKed-by: Alexei Starovoitov
Sorry I just saw this after pushing this pull request back out.
Please someone deal with this via a revert or similar.It was subsequently reverted in this commit: netfilter: revert introduction of egress hook. This revert might not have been posted in all relevant mailing lists, adding a bit to the confusion.Fast forward almost two years. Issues and concerns (among them about interactions with tc/qdisc) having been addressed, egress was added again in kernel 5.16 (9 Jan 2022). Kernelnewbies for Linux 5.16 has this entry:NetfilterSupport classifying packets with netfilter on egress commit, commit, commit, commitOn Linux Kernel Driver Database:CONFIG_NETFILTER_EGRESS: Netfilter egress support
[...]
found in Linux kernels: 5.16–5.19, 5.19+HEAD
Likewise, nftables userland support for egress was officially added only after kernel support was committed in nf-next (so a bit before 5.16 was out) and was made available in the nftables 1.0.1 release:
This release contains new features available up to the Linux kernel
5.16-rc1 release:
[...]
egress hook support (available since 5.16-rc1).
OP's ruleset is accepted on a kernel built with the relevant kernel option CONFIG_NETFILTER_EGRESS, which means kernel >= 5.16 along with nftables >= 1.0.1.
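To check whether a given system meets both requirements, something like the following can be used (the /boot/config-* path is the usual Debian location; other distributions may expose the kernel config differently, e.g. via /proc/config.gz):
nft --version     # needs >= 1.0.1
uname -r          # needs >= 5.16
grep NETFILTER_EGRESS "/boot/config-$(uname -r)"   # expect CONFIG_NETFILTER_EGRESS=y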
|
I'm looking to apply firewall rules on egress to control DHCP output from a Docker container. I don't want the DHCP container to share the host's network stack as adding CAP_NET_ADMIN effectively gives the container control of the network stack.
I notice here that an egress hook was added to netfilter in kernel 5.7 (uname -r says I have 5.10).
According to information in this commit, I have added the following table:
table netdev filterfinal_lan {
chain egress {
type filter hook egress device enp1s0 priority 0; policy accept;
}
}
However when I attempt to apply the config it tells me it's not recognised:
/etc/nftables.conf:107:20-25: Error: unknown chain hook
type filter hook egress device enp1s0 priority 0; policy accept;
^^^^^^
I'm unsure which version of nftables supports the egress hook, but my nft --version is nftables v0.9.8 (E.D.S.). Information on the egress hook seems quite elusive.
What is required to enable the use of this hook?
| NFTables Egress Hook? |
Nov* 8 09:37:12 kernel: [10967.520783] New Input packets: IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=192.168.1.2 DST=192.168.1.2 LEN=85 TOS=0x00 PREC=0xC0 TTL=64 ID=6855 PROTO=ICMP TYPE=3 CODE=1 [SRC=192.168.1.2 DST=192.168.1.1 LEN=57 TOS=0x00 PREC=0x00 TTL=64 ID=60616 DF PROTO=UDP SPT=49662 DPT=53 LEN=37 ]
[10967.520783] is the kernel uptime in seconds when the message was created. Originally this was the first part of the kernel log message. But the message seems to have been processed by something else (maybe a syslog daemon?) that has prefixed it with a human-readable timestamp and kernel: indicating that this message was logged by the OS kernel, not by any application or service.
The packet described by this log message came to Netfilter through the loopback interface (IN=lo), so there is no real Ethernet layer involved, and so source and destination MAC addresses are all zeroes. The 08:00 at the end of the MAC= string is probably the EtherType, indicating that the "payload" of the low-level packet contains an IPv4 packet.
The source and destination IP addresses are both 192.168.1.2, so this packet seems to have been generated locally on host 192.168.1.2. Within the payload of the IPv4 packet, there is an ICMP packet of Type 3, Code 1 - that is, a "Host unreachable" error packet.
An error message is meaningless if you cannot figure out what caused it to be sent, so an ICMP error packet contains the headers of the original packet which caused the error to be detected. These headers are decoded within the square brackets:
[SRC=192.168.1.2 DST=192.168.1.1 LEN=57 TOS=0x00 PREC=0x00 TTL=64 ID=60616 DF PROTO=UDP SPT=49662 DPT=53 LEN=37 ]
So, the packet that caused the Host unreachable error message was originated in this host (192.168.1.2) and its destination was 192.168.1.1. The protocol was UDP and the destination UDP port was 53, the standard DNS port. So, this host apparently has some configuration (either manually or by DHCP) that tells it to use 192.168.1.1 as a DNS server. But as it was trying to send an UDP packet to the DNS server at 192.168.1.1, something went wrong. The kernel may have detected that the network connection was lost, or the kernel tried to make an ARP request to find the MAC address of 192.168.1.1 but got no response. And so the kernel generated the ICMP error packet and sent it locally through the loopback interface.
The packet described by this message came in through the wlo1 wireless network interface. Assuming that the MAC= string is just the first 14 bytes from the beginning of the Layer-2 Ethernet packet, the destination MAC address (= presumably the MAC address of this host) would be b8:81:98:cb:ef:a8. According to one MAC address lookup website, this MAC would belong to a network adapter (or other device) manufactured by Intel.
The source MAC address would be 5c:77:77:6e:0d:7b. The vendor lookup failed to tell anything about this address.
Both MAC addresses are regular, globally unique unicast MAC addresses. That may be surprising as the packet contains a multicast IP address. This might be caused by how multicast messages are handled in Wi-Fi networks.
The 08:00 is again Ethertype, indicating plain old IPv4.
The destination IP address 224.0.0.1 is the standard "all-hosts" multicast address. Because it would not make sense to send packets to all multicast-capable systems in the whole internet, the TTL=1 restricts this multicast to all hosts within a single subnet only.
PROTO=2 indicates this is an Internet Group Management Protocol (IGMP) packet: these are used by multicast routers and multicast-capable systems to find out which multicast groups each multicast-capable systems wants to be part of. (Every multicast-capable host is always part of the all-hosts group.) The IGMP data is not decoded in this message, but because the IP packet length is just 32 octets (LEN=32), this is most likely a IGMPv2 general membership query packet.
Basically, a multicast-capable router is asking for all multicast-capable systems to report if they want to receive any multicast traffic.
|
What happened during these logs entry events below?
Nov* 8 09:37:12 kernel: [10967.520783] New Input packets: IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=192.168.1.2 DST=192.168.1.2 LEN=85 TOS=0x00 PREC=0xC0 TTL=64 ID=6855 PROTO=ICMP TYPE=3 CODE=1 [SRC=192.168.1.2 DST=192.168.1.1 LEN=57 TOS=0x00 PREC=0x00 TTL=64 ID=60616 DF PROTO=UDP SPT=49662 DPT=53 LEN=37 ]
Nov* 8 09:38:13 kernel: [11029.272652] New Input packets: IN=wlo1 OUT= MAC=b8:81:98:cb:ef:a8:5c:77:77:6e:0d:7b:08:00 SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
1. In the first example, why are square brackets used?
2. The numbers in the square brackets have a different meaning; why?
3. What is the meaning of 08:00 at the end of the MAC address?
4. In the second example, what is the role of the multicast address and of the 0.0.0.0 address?
5. Why is TTL=1?
| Understanding nftables logs |
Yes the usual wildcard character * is nft's equivalent of iptables' +. The man page was not documenting this (relatively old) feature until recently.
From a recent enough nft man page:
Like with iptables, wildcard matching on interface name prefixes is
available for iifname and oifname matches by appending an asterisk (*)
character. Note however that unlike iptables, nftables does not accept
interface names consisting of the wildcard character only - users are
supposed to just skip those always matching expressions. In order to
match on literal asterisk character, one may escape it using backslash
(\).
eg:
iifname "veth*" accept
Also, it doesn't appear possible today to use a set of interfaces (type ifname or typeof iifname) with a wildcard in an element.
UPDATE: this 2022-04 git commit in nftables, along a few other related commits and possibly adequate kernel support, finally allows using interface wildcards in sets, with intervals as the method:
Allows to interface names in interval sets:
table inet filter {
set s {
type ifname
flags interval
elements = { eth*, foo }
}
This should become available in the release following current 1.0.2.
A completely different method to manage interfaces dynamically appearing and disappearing would be to tag them with a group just after they are created (and probably before they are brought up) and have nftables rules matching an interface's group. eg:
ip link set ppp7 group 42
while having an nftables rule reusing this meta information:
iifgroup 42 jump from-ppp |
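Putting the two pieces together, a minimal sketch of the group-based approach could look like this (the from-ppp chain content is only an illustration):
# tag the interface right after it is created
ip link set ppp7 group 42

# ruleset excerpt
table inet filter {
    chain from-ppp {
        counter accept
    }
    chain input {
        type filter hook input priority 0; policy accept;
        iifgroup 42 jump from-ppp
    }
}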
In iptables it was possible to use eth+ as interface name to match any interface starting with eth (or as a more practical example this also worked to match virbr+ and veth+, i.e. the + seemed to have the effect of a * in shell globs).
Does nftables offer a similar facility to match interface names? How do I use it?
| Is there a sort of wildcards in nftables? |
Your ruleset is correct. But your nftables version is slightly too old. Here's the announce including your example:
[ANNOUNCE] nftables 0.9.1 release:
Hi!
The Netfilter project proudly presents:
nftables 0.9.1This release contains fixes and new features, available up with Linux
kernels >= 5.2.[...]ARP sender and target IPv4 address matching, eg.
table arp x {
chain y {
type filter hook input priority filter; policy accept;
arp saddr ip 192.168.2.1 counter packets 1 bytes 46
}
}this updates rule counters for ARP packets originated from the
192.168.2.1 address.So you might need kernel >= 5.2 (that's unclear if required) but do need nftables >= 0.9.1
For the kernel: https://www.raspberrypi.org/software/operating-systems/ shows that Raspberry Pi OS currently comes with kernel 5.10.x so this is a moot point.
For the nftables version, while it's usually not recommended, you can try and use buster-backports to get a newer version of nftables, currently 0.9.6. If you find this is not suitable for RPi, you should instead recompile from (Debian) sources your own backported package.
Note: the wiki lags a bit and might not always be completely accurate. Usually the man page is more accurate, once the feature exists of course. Eg:
buster's version 0.9.0:ARP HEADER EXPRESSION
arp [ARP header field]
versus buster-backports' version 0.9.6:
ARP HEADER EXPRESSION
arp {htype | ptype | hlen | plen | operation | saddr { ip | ether } | daddr { ip | ether }
Workaround for simple cases (and additional difficulty)
If you really can't change nftables for simple cases like this one, it's possible to use a raw payload expression instead, with good knowledge of the ARP protocol, keeping in mind that this protocol is intended to be used on more than just Ethernet and IPv4 and thus has a few generic parts that are always constants in usual IPv4 over Ethernet use (eg: hlen=6, plen=4).
Anyway I cheated and just read back a working ruleset with nftables 0.9.0 to get it displayed as raw payload, and translated the decimal output in hex (except payload offset and length):
table arp filter {
chain input {
type filter hook input priority 0; policy drop;
@nh,112,32 0xc0a80201 counter accept
} chain output {
type filter hook input priority 0; policy accept;
}
}
which is easy to read using Wikipedia's link:
offset 112 (in bits) is offset 14 in bytes: Sender Protocol Address aka saddr ip.
length 32 bits: an IPv4 address length
0xc0a80201 means 192.168.2.1 |
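The hex constant is just the four address bytes concatenated, which can be reproduced with a plain shell one-liner (nothing nftables-specific):
$ printf '%02x' 192 168 2 1; echo
c0a80201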
I wanted to add the rule arp saddr ip 192.168.2.1 counter accept to my Nftables firewall. When reading the config-file with sudo nft -f /etc/nftables2.conf, I get the error-message
/etc/nftables2.conf:26:21-15: Error: syntax error, unexpected saddr
arp saddr ip 192.168.2.1 counter accept
^^^^^^
Table in question:
table arp filter {
chain input {
type filter hook input priority 0; policy drop;
arp saddr ip 192.168.2.1 counter accept
}
chain output {
type filter hook input priority 0; policy accept;
}
}
I can't get it fixed. At first I tried different IPs, then I tried the same with saddr ether <MAC of device> instead of the IP address but got the same result.
I use nftables version 0.9.0 on the newest Raspberry Pi OS.
Can someone please point out where my mistake lies? I'm kinda lost...
Thanks for your time.
| nftables Error: syntax error, unexpected saddr |
No, the explanation is much simpler: the counter statement has optional arguments packets and bytes which display the number of packets and bytes counted by the counter when a packet reached the rule where it was. Without filter before the counter, any packet (including on loopback) will thus increase the values so it can happen very early and fast. The tool doing conversion saw an iptables default counter and chose to also translate its values for fidelity.
So usually when you write a rule you don't set those values, you put a simple counter alone, and they both get a default of 0. When packets traverse it those values increase. Optionally, especially when used in a named counter stateful object where it can even be displayed-and-reset (using something like nft reset counters), to do some form of accounting, one can set those values when writing the ruleset: usually when restoring the ruleset saved right before reboot. This can only be reset if used as the named variant, not as "inlined" anonymous counter. They cannot be used to alter the match in a rule, there's no other option than to display them.
Any wifi problem you have cannot be caused by any counter statement.
If you now want to use packet counts to limit the usage of rules in an nftables firewall, there are a few different methods depending on needs (see the sketch after this list):
the other stateful object, quota (which again can be used anonymously, but can only be reset if used named). You can then have, for example, a rule never match or start matching depending on its count.
there's the limit statement to count rates. For example, if you fear a log rule could flood log files, you can use this to limit the number of logs done. It can also limit the rate to some resource (usually used with other filters with conntrack).
with a recent enough nftables (0.9.2?) and kernel (4.18?), there's the ct count conntrack expression to count the number of established connections using conntrack, usually to limit concurrent access to some resource (ssh, web server...)
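As a small illustration of the quota and limit methods, here are two rule fragments meant to be placed inside a chain; the values are arbitrary examples:
# log at most 5 matching packets per minute, to avoid flooding the logs
ct state invalid limit rate 5/minute log prefix "invalid: "

# a rule that stops matching once 100 MB have been counted by it
quota 100 mbytes accept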
A few months ago, I migrated the firewall of a debian laptop from iptables to nftables, using debian's recommended procedure, and all seems to have been fine. Now, months later, I'm scrutinizing the rule-set created by that migration procedure, trying to learn the nftables syntax, and see what seem to be several counter-based rules that I don't understand and suspect might not be correct. I haven't found the nftables wiki to be a helpful educational resource, and haven't found any other on-line educational resource that addresses this kind of question:
The default auto-migrated rule-set included the following:
table inet filter {
chain INPUT {
type filter hook input priority 0; policy drop;
counter packets 123 bytes 105891 jump ufw-before-logging-input
counter packets 123 bytes 105891 jump ufw-before-input
counter packets 0 bytes 0 jump ufw-after-input
counter packets 0 bytes 0 jump ufw-after-logging-input
counter packets 0 bytes 0 jump ufw-reject-input
counter packets 0 bytes 0 jump ufw-track-input
The first two counter statements are examples of what caught my eye. Am I correct that they are saying "jump to the rules in section ufw-before-foo, but only after the first 123 packets and the first 105891 bytes have been received"?
Why not start immediately from packet 0, byte 0?
Why not use a syntax like >=, which seems to be supported by nftables?
Are these numbers arbitrary? Possibly due to a glitch in the migration?
The above rule-set includes a jump to the following chain, with a possibly similar issue. Here's a snippet of it:
chain ufw-before-input {
iifname "lo" counter packets 26 bytes 3011 accept
ct state related,established counter packets 64 bytes 63272 accept
ct state invalid counter packets 0 bytes 0 jump ufw-logging-deny
ct state invalid counter packets 0 bytes 0 drop
...
}
Why are the decisions to accept based upon receiving 26 or 64 prior packets?
A firewall can be flushed arbitrarily at any time after power-up and network discovery/connection, so why drop all those initial packets?
As I mentioned above, these rules have been in place for months, so I'm wondering what negative effect they could possibly have had. The only candidate that I've come up with is that the laptop can sometimes have a difficult time making a wifi connection (especially after resuming from sleep) while a second nearby laptop has no such trouble.
Could these rules dropping packets be the culprit for difficulty negotiating a wifi connection? | firewall: nftable counter rules
The wiki says what you tried is not yet implemented: You have to obtain the handle to delete a rule. The example is:
$ sudo nft -a list table inet filter
table inet filter {
...
chain output {
type filter hook output priority 0;
ip daddr 192.168.1.1 counter packets 1 bytes 84 # handle 5
}
}
The -a shows the assigned handle "5" as a comment, so you can
$ sudo nft delete rule filter output handle 5 |
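Applied to the ruleset shown in the question (family inet, table filter), the same procedure becomes the following; the handle number is only an example, use whatever -a prints on your system:
$ sudo nft -a list chain inet filter output
$ sudo nft delete rule inet filter output handle 4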
Current system:Distro: Ubuntu 20.04
kernel: 5.4.0-124-generic
nft: nftables v0.9.3 (Topsy)
I am new to and learning nftables. Here is my current nft ruleset:
$sudo nft list ruleset
taxmd-dh016d-02: Wed Sep 21 12:09:08 2022
table inet filter {
chain input {
type filter hook input priority filter; policy accept;
} chain forward {
type filter hook forward priority filter; policy accept;
} chain output {
type filter hook output priority filter; policy accept;
ip daddr 192.168.0.1 drop
}
}
I want to delete ip daddr 192.168.0.1 drop from the output chain. I tried the following:
sudo nft del rule inet filter output ip daddr 192.168.0.1 drop
sudo nft delete rule inet filter output ip daddr
sudo nft 'delete element ip daddr 192.168.0.1 drop'
sudo nft 'delete element ip'
sudo nft delete rule filter output ip daddr 192.168.0.1 drop
But nothing works, I keep getting this error:
Error: syntax error, unexpected inet
delete inet filter chain output ip daddr 192.168.0.1 drop
^^^^Why can't I delete a specific element? I would think this would be straight forward, but I am missing something.
| How do I delete a specific element in a chain in nftables? |
This is a case of NAT loopback handling.
Currently the WireGuard server ("WGS") redirects only from eth0, so nothing will happen with traffic received from wg0.
One should redirect, on WGS, incoming traffic to ports 80,443 intended for its own public IP address. But there's a catch: how can one guess at a pre-routing step that the packet will be classified as local, without hardcoding this local destination IP address in the rule, while this classification happens at the routing decision step which hasn't happened yet? (See this schematic for a summary of these various steps in the life of a packet.)
One can ask the kernel, dynamically in the packet path, how it would route a packet by using nftables' fib expression (requires kernel >= 4.10):
FIB EXPRESSIONS
fib {saddr | daddr | mark | iif | oif} [. ...] {oif | oifname | type}
A fib expression queries the fib (forwarding information base) to
obtain information such as the output interface index a particular
address would use. The input is a tuple of elements that is used as
input to the fib lookup functions.
This can be performed before the routing step and used to do the redirection only if the packet is tentatively classified as a local destination: to an address belonging to WGS. As this is done in prerouting, once the actual routing decision happens, it won't be classified as a local packet anymore but as a routed packet (back to sender).
To redirect traffic received from wg0 (thus from the Web Server ("WS")) sent to any address belonging to WGS:
nft add rule ip firewall prerouting iif wg0 tcp dport '{ 80, 443 }' fib daddr type local dnat to 10.0.0.2
If needed (eg: to still be able to access a private management interface listening on WGS' wg0 address only) it can be further filtered so only a destination local to WGS but not being wg0's 10.0.0.1 will match, by using this instead:
nft add rule ip firewall prerouting iif wg0 ip daddr != 10.0.0.1 tcp dport '{ 80, 443 }' fib daddr type local dnat to 10.0.0.2
As the nat/postrouting chain already masquerades anything in the 10.0.0.0/24 source range, there's no additional step needed: WS's source address will be replaced by WGS's address on wg0. NAT loopback requires the source to be changed (WS can't accept a packet coming from its own IP address).
Optionally a dedicated rule could be inserted before this masquerade rule to choose an other IP address to snat to instead. Any address would do (instead of the resulting source 10.0.0.1), be it local to WGS or not as long as on WS this address is routed through WGS. As WS's and WGS' routing table weren't provided, the example below might be wrong. Pick the non-existing 10.0.1.2 address to replace WS's own 10.0.0.2, still allowing WS's own logs to easily identify requests coming from itself:
nft insert rule ip firewall postrouting ip saddr 10.0.0.2 ip daddr 10.0.0.2 snat to 10.0.1.2
or to be overly conservative:
nft insert rule ip firewall postrouting ip saddr 10.0.0.2 ip daddr 10.0.0.2 tcp dport '{ 80, 443 }' ct status dnat oif wg0 snat to 10.0.1.2
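For clarity, here is roughly how the two NAT chains from the question could end up looking after these changes (a sketch only; the optional snat rule is included):
chain prerouting {
    type nat hook prerouting priority dstnat;
    iif eth0 tcp dport {80,443} dnat to 10.0.0.2
    iif wg0 tcp dport {80,443} fib daddr type local dnat to 10.0.0.2
}
chain postrouting {
    type nat hook postrouting priority srcnat;
    ip saddr 10.0.0.2 ip daddr 10.0.0.2 snat to 10.0.1.2
    ip saddr 10.0.0.0/24 masquerade
}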
I have a WireGuard server as the edge router. Forwarding all http traffic to my web server.
Everything works fine but there is a problem. The web server cannot access itself through the WireGuard public IP address. On the web server computer, I cannot use the web browser to access my website.
I found that daddr based nat can resolve this issue, but I would like to know if there is any better method, because the IP address may vary while iif is fixed. My Netgear WiFi router can do port forwarding without this kind of problem. But I cannot check its internal rules, and I don't think it uses daddr based nat.
Here are the config of the WireGuard server.
wg0
interface: wg0
Address = 10.0.0.1/24
public key: (hidden)
private key: (hidden)
listening port: 51820
peer: (hidden)
endpoint: (hidden):51820
allowed ips: 10.0.0.2/32
latest handshake: 56 seconds ago
transfer: 20.69 MiB received, 115.85 MiB sentnftables
table ip firewall {
chain input {
type filter hook input priority filter; policy drop;
ct state established,related accept
udp dport {51820} accept
tcp dport {22} accept
ip saddr 10.0.0.0/24 accept
}
chain prerouting {
type nat hook prerouting priority dstnat;
iif eth0 tcp dport {80,443} dnat to 10.0.0.2
}
chain postrouting {
type nat hook postrouting priority srcnat;
ip saddr 10.0.0.0/24 masquerade
}
chain forward {
type filter hook forward priority filter; policy drop;
ct state established,related accept
ct status dnat accept
ip saddr 10.0.0.0/24 accept
}
}
Here is the config of the web server.
wg0
interface: wg0
Address = 10.0.0.2/24
public key: (hidden)
private key: (hidden)
listening port: 51820
fwmark: 0xca6c
peer: (hidden)
endpoint: (hidden):51820
allowed ips: 0.0.0.0/0
latest handshake: 25 seconds ago
transfer: 114.69 MiB received, 3.56 MiB sent
persistent keepalive: every 25 seconds | Web server cannot access itself through public ip address on nftables port forwarding |
This feature also requires kernel >= 5.5 for adequate netfilter support. Description in kernelnewbies.org:
Linux 5.5 was released on 26 Jan 2020
[...]
netfilter
Support iif matches in POSTROUTING commit
From the commit:
netfilter: Support iif matches in POSTROUTING
Instead of generally
passing NULL to NF_HOOK_COND() for input device, pass skb->dev which
contains input device for routed skbs.
Note that iptables (both legacy and nft) reject rules with input
interface match from being added to POSTROUTING chains, but nftables
allows this.
From the description, before this commit, the input interface was not provided by netfilter and the iif expression never matched.
|
There is only one table on the server - "nat" and it contains only two chains: "prerouting" and "postrouting". IP forwarding is enabled. I'm trying to set more specific conditions for the source nat rule. When I set the classic rule :
nft add rule nat postrouting ip saddr 192.168.1.0/24 oif eth0 snat 1.2.3.4everything works fine. But I'd like to specify also the interface where it's located for the network "saddr 192.168.1.0/24".
nft add rule nat postrouting **iif eth1** ip saddr 192.168.1.0/24 oif eth0 snat 1.2.3.4When I enter this command, the program accepts it and the rule appears in the table. But the traffic doesn't go. Does anyone have any idea why?
| How to set hard conditions for source nat rules in nftables? |
When a packet is emitted, there's a routing decision made: this decision chooses the outgoing interface and the matching source IP address to use.
When the route/output chain sets a mark, it triggers a reroute check, as seen in this schematic (which was made for iptables in mind but is totally usable for nftables). The reroute check alters the route... but doesn't change the source IP address. So more work has to be done.add a NAT rule to alter the source IP address.This has to be done after the reroute check, so it's done in nat/postrouting. Note that the same table can have different chain types (contrary to iptables where table <=> type).
nft add chain ip test testnat '{ type nat hook postrouting priority srcnat; policy accept; }'
nft add rule ip test testnat meta mark 1 masqueradeNow the correct packet really leaves.and allow reply flow to be accepted despite not arriving through the default routeYou can either relax Reverse Path Forwarding to Loose mode by changing rp_filter:
sysctl -w net.ipv4.conf.wlo1.rp_filter=2or mark reply flow to use the same routing table as outgoing flow. It won't really improve security but anyway:
nft add chain ip test testpre '{ type filter hook prerouting priority mangle; policy accept; }'
nft add rule ip test testpre iif "wlo1" meta mark set 1Alas, this won't work without yet-an-other-tweak which is needed since a new undocumented feature appeared in 2010:
sysctl -w net.ipv4.conf.wlo1.src_valid_mark=1Notes:It's possible to do better security-wise by storing the mark in conntrack's connmark (ct mark) to only allow the correct reply flows and not anything else to bypass the Strict Reverse Path Forwarding. In Strict mode, nothing will reach through wlo1 anymore unless it's a reply from outgoing traffic. Here's the full corresponding nftables rule file for this (to be used with option 2. above while replacing nftables rules):
table ip test {
chain test {
type route hook output priority mangle; policy accept;
meta cgroup 1234 meta mark set 1 ct mark set meta mark
} chain testnat {
type nat hook postrouting priority srcnat; policy accept;
meta mark 1 masquerade
} chain testpre {
type filter hook prerouting priority mangle; policy accept;
ct mark 1 meta mark set ct mark
}
}Also, the mark reroute check might interfere with correct PMTU / TCP MSS in some cases, according to what I could read there in the uidrange entry: https://kernelnewbies.org/Linux_4.10#NetworkingIf you can convert the cgroups usage into a limited set of uids instead, then this can be done correctly using only the routing stack, without netfilter or nftables, using ip rule add ... uidrange ... as described above. See my answer there about it: Routing traffic for a user through specific interface (tum1). |
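For the record, the uid-based variant stays entirely in the routing stack. A sketch, assuming the traffic that must bypass the VPN runs under uid 1234 (replace with the real uid; the test table is the one already defined in the question):
ip rule add uidrange 1234-1234 lookup test
# table test already contains: default via [WANGATEWAY] dev wlo1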
I'm trying to get some traffic to go through a VPN and other traffic to not do so.
Packets with a given fwmark are supposed to go to my default interface (wlo1), and all other traffic to a tunnel interface (tun0, using OpenVPN) in the main table. I have added this rule in nftables:
table ip test {
chain test {
type route hook output priority mangle; policy accept;
meta cgroup 1234 meta mark set 1
}
}In the main routing table I have these entries:
default via 10.11.0.1 dev tun0
10.11.0.0/16 dev tun0 proto kernel scope link src 10.11.0.20
[WANIP.0]/24 dev wlo1 proto kernel scope link src [WANIP]
128.0.0.0/1 via 10.11.0.1 dev tun0
[VPNIP] via [WANGATEWAY] dev wlo1Packets with fwmark 1 is instead led to their own routing table: ip rule add from all fwmark 1 lookup test. In the test table I have added the following route:
default via [WANGATEWAY] dev wlo1When I run ping 8.8.8.8 from this cgroup, it's stuck. It seems able to send but not receive any packets.
VPN traffic works as expected.
What exactly is going on?
| Route traffic in a cgroup outside a VPN tunnel |
By searching in the source for version 0.9.0:static const struct datatype *datatypes[TYPE_MAX + 1] = {[...] [TYPE_IFINDEX] = &ifindex_type,[...] [TYPE_IFNAME] = &ifname_type,
};and then looking for where they are defined:const struct datatype ifindex_type = {
.type = TYPE_IFINDEX,
.name = "iface_index",
.desc = "network interface index",[...]const struct datatype ifname_type = {
.type = TYPE_IFNAME,
.name = "ifname",
.desc = "network interface name",it's possible to find that the needed types are iface_index (eg: iif lo) and ifname (eg iifname "lo"), even if (as of writing this answer) it's undocumented in nft's man page.
nft add set inet filter if-index-set '{ type iface_index; }'
nft add set inet filter if-name-set '{ type ifname; }'nft add element inet filter if-index-set '{ lo }'
nft add element inet filter if-name-set '{ lo }'nft add chain inet filter input '{ type filter hook input priority 0; }'
nft add rule inet filter input iif @if-index-set counter
nft add rule inet filter input iifname @if-name-set counterNote that ifname wildcard support in a set has been added on 2022-05-31 in nftables 1.0.3. A named set must use flags intervals and the final wildcard character is * (not + like in iptables).
|
Using sets in nftables is really cool. I am currently using a lot of statements like these in my nftables.conf rulesets:
iifname {clients0, dockernet} oifname wan0 accept \
comment "Allow clients and Docker containers to reach the internet"In the rule above {clients0, dockernet} is an anonymous (inline) set of interfaces. Instead of repetition in the rules over and over, I'd like to define a set of interfaces at the top of the file, called a named set in nftables. The manpage (Debian Buster) shows how to do that for several types of sets: ipv4_addr, ipv6_addr, ether_addr, inet_proto, inet_service and mark. However, it seems it's not available for interfaces by name or simple primitive type such as strings.
I've the approach below, but this does not work with the errors given:Omitting the type:
table inet filter {
set myset {
elements = {
clients0,
dockernet,
}
}
[...]
}Result: Error: set definition does not specify key.Using the string type:
table inet filter {
type string;
set myset {
elements = {
clients0,
dockernet,
}
}
[...]
}Result: Error: unqualified key type string specified in set definition.Is there really no way of naming the anonymous set I've shown on the top?
| How do I create a named set of interfaces by name in nftables? |
In addition to the Linux network stack, there's an additional "stack" intended to alter behavior but that is kept as much as possible separate from the network stack: Netfilter. This facility allows clients (still in thekernel) to hook into it to intercept packets at various strategic places during its life in the network stack: that's used for firewalling and doing NAT. Among those clients but not only, there are iptables and nftables. Here's the corresponding Packet flow in Netfilter and General Networking schematic:By Jan Engelhardt - Own work, Origin SVG PNG, CC BY-SA 3.0, Link
While this schematic was created with iptables in mind, it's also applicable to nftables. In this answer, nftables has the same role as iptables. Most things said here about nftables is applicable to iptables (legacy or nft version) and vice-versa.
As can be seen, another important client (really considered to be part of Netfilter) is conntrack, which keeps track of all flows seen, and is also doing the bulk of NAT handling using the same flow bookkeeping. NAT is not really handled by nftables: as written in the center of the schematic, it will receive only the first packet of each flow to be able to give an altering rule that will determine the fate of the whole flow (rather than the single packet): once the information is stored in the conntrack lookup entry, conntrack will handle it standalone, and chains of hook type nat will not see following packets from this flow.
So it doesn't matter: once a flow is already handled by conntrack, it won't be affected by any change in nat rules, even if there are no more rules, simply because those rules are not used anymore.
What can be done to affect this flow is to query directly the conntrack facility. The relevant tool for this is called conntrack (from https://conntrack-tools.netfilter.org/).
As OP wrote, it can be used to read information about conntrack entry, but can also be used to update, create, delete or flush (delete all) entries.
One can choose the granularity:remove all entries:
conntrack -F
remove all snat-ed entries (technically meaning that the reply destination address is not the same as the original source address in the lookup table):
conntrack -D --src-natall the way to a surgical removal of OP's entry specifying all elements even including the precise ICMP id (so this wouldn't disrupt an identical ping command also started before the NAT rules were removed, since the id would most certainly be different):
# conntrack -D -p icmp --orig-src 192.168.2.100 --orig-dst 8.8.8.8 --reply-src 8.8.8.8 --reply-dst 192.168.1.10 --icmp-type 8 --icmp-code 0 --icmp-id 13006
icmp 1 28 src=192.168.2.100 dst=8.8.8.8 type=8 code=0 id=13006 src=8.8.8.8 dst=192.168.1.10 type=0 code=0 id=13006 mark=0 use=1
conntrack v1.4.6 (conntrack-tools): 1 flow entries have been deleted.(there's no reason Ubuntu 18.04LTS's conntrack 1.4.4 behaves differently here)Once this entry is removed, the next packet that was previously part of this flow will be seen as a new packet. Being in state NEW, it will be given again a chance to be altered and will traverse the nat chains. As there will be no more alteration, if it was egress it will traverse un-NATed to the next router which will probably drop it or pass it to a router which will drop it (because of Strict Reverse Path Forwarding or for specific routing rules for an RFC1918 address), if it was a late ingress reply from 8.8.8.8, it will be locally routed and ignored by the network stack: the ping command is disrupted and now times out.
|
I have two virtual machines (router and client) ubuntu server 18 04, the router is the gateway for the client.Nftables are installed on the router and a NAT table is created with two chains pre and postrouting with a rule for the client.
table ip nat {
chain prerouting {
type nat hook prerouting priority -100; policy accept;
} chain postrouting {
type nat hook postrouting priority 100; policy accept;
oif "enp0s3" ip saddr 192.168.2.100 snat to 192.168.1.10 comment "STATIC" # handle 4
}
}I turn on the ping from the client to the Internet - everything works well.
root@router:/home/router# conntrack -L -n
icmp 1 29 src=192.168.2.100 dst=8.8.8.8 type=8 code=0 id=13006 src=8.8.8.8 dst=192.168.1.10 type=0 code=0 id=13006 mark=0 use=1but when you delete the rule, the session is not interrupted and the ping continues to go:
root@router:/home/router# nft delete rule nat postrouting handle 4
root@router:/home/router# nft list ruleset
table ip nat {
chain prerouting {
type nat hook prerouting priority -100; policy accept;
} chain postrouting {
type nat hook postrouting priority 100; policy accept;
}
}
root@router:/home/router# conntrack -L -n
icmp 1 29 src=192.168.2.100 dst=8.8.8.8 type=8 code=0 id=13006 src=8.8.8.8 dst=192.168.1.10 type=0 code=0 id=13006 mark=0 use=1
conntrack v1.4.4 (conntrack-tools): 1 flow entries have been shown.I would like that when a rule is changed or deleted the current session from the client is terminated.
how can i get it?
| How to reset sessions in nat table? |
The order of the rules is important: if an earlier rule matches a packet and says that it should be accepted, a later rule cannot override that decision. You must either take care to insert the rule blocking the traffic before any rule that will accept it, or delete a previous rule that is currently accepting the traffic, if applicable.
By default, nft add will add a new rule to the tail end of the specified rule chain, unless you explicitly specify that the rule is to be inserted after a specific existing rule. To add rules to the beginning of the chain, before any existing rule, you would need to use nft insert instead.
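As a quick illustration on the ruleset from the question (inet filter input):
# prepend the rule so it is evaluated before the existing accept rules
sudo nft insert rule inet filter input tcp dport 80 drop

# check the resulting rule order
sudo nft -a list chain inet filter input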
|
I want to close port 80 in localhost.
sudo nft add rule inet filter input tcp dport 80 dropTo check with nmap:
sudo nmap -p 80 127.0.0.1
Starting Nmap 7.70 ( https://nmap.org ) at 2021-05-02 05:16 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00010s latency).PORT STATE SERVICE
80/tcp open httpNmap done: 1 IP address (1 host up) scanned in 0.31 secondsWhy can't close the port 80?
sudo nft list ruleset
table inet filter {
chain input {
type filter hook input priority 0; policy accept;
iif "lo" accept comment "Accept any localhost traffic"
iif != "lo" ip daddr 127.0.0.0/8 counter packets 0 bytes 0 drop comment "drop connections to loopback not coming from loopback"
tcp dport { http } ct state established,new drop
tcp dport http drop
} chain forward {
type filter hook forward priority 0; policy accept;
} chain output {
type filter hook output priority 0; policy accept;
}
}Now insert it with:
sudo nft insert rule inet filter input tcp dport 80 drop
sudo nmap -p 80 127.0.0.1
Starting Nmap 7.70 ( https://nmap.org ) at 2021-05-02 08:29 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up.PORT STATE SERVICE
80/tcp filtered httpNmap done: 1 IP address (1 host up) scanned in 2.12 seconds | Why can't close the port 80 with nftables? |
By default nftables does not create any chains meaning every packet is accepted and allowed.
You have created three base chains input forward and output
The default policy if not specified for these is accept which is the same as not having any chains.
To block packets by default you need to set the default policy to drop, for example:
# Sees incoming packets that are addressed to and have now been routed to the local system and processes running there
add chain filter input {
type filter hook input priority filter; policy drop;
}
# Sees packets that originated from processes in the local machine
add chain filter output {
type filter hook output priority filter; policy drop;
}
Then explicitly add rules to those chains to allow specific traffic.
For more info on creating chains see here.
You should find the nftables wiki most useful for developing your firewall.
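As a minimal sketch of a drop-by-default input policy that keeps loopback and an existing SSH session working (double-check it before rebooting a remote machine; the port numbers are the usual defaults):
table inet filter {
    chain input {
        type filter hook input priority filter; policy drop;
        iif "lo" accept
        ct state established,related accept
        tcp dport 22 accept
    }
    chain forward {
        type filter hook forward priority filter; policy drop;
    }
    chain output {
        type filter hook output priority filter; policy accept;
    }
}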
I installed nftables on a new Debian 12 (on AWS EC2). I am connected to the instance (the external EC2 IP) over port 22/tcp. I mention EC2 because maybe they do weird tricks. I then installed nftables with:
sudo apt install nftables
sudo systemctl enable nftables
sudo systemctl start nftablesAt this point, I have an empty configuration:
flush ruleset table inet filter {
chain input {
type filter hook input priority filter;
}
chain forward {
type filter hook forward priority filter;
}
chain output {
type filter hook output priority filter;
}
}My question is simple: I read, and would expect, that in the absence of any rule allowing any packet, my own session should be blocked, and I should be good to reboot. Why is this quite obviously not the case, and/or what is allowing my session?
I noticed that it is only if I explicitly block the flow that it indeed gets blocked.
| Does nftables allow port 22 or other ports by default? |
I'll complete OP's setup: address 192.168.1.38/24 on eth0 and a gateway (not really needed) 192.168.1.1. If the setup uses Wifi rather than actual Ethernet, the first method (bridge) won't be available without additional efforts (probably easy if Access Point, very difficult to impossible if not AP).nmap uses a packet socket (type AF_PACKET) to craft ARP requests, rather than using the kernel's network stack dealing with ARP cache and resolution. arping behave similarly (and will be used instead to simplify examples).tcpdump also uses AF_PACKET to capture.By contrast even other special tools such as ping, when they merely use AF_INET, SOCK_DGRAM, IPPROTO_IP or AF_INET, SOCK_RAW, IPPROTO_ICMP rather than AF_PACKET, will be filtered by iptables.Their methods can be verified using strace -e trace=%network (as root user) on these commands.
As presented in Packet flow in Netfilter and General Networking:AF_PACKET happens before (at ingress) or after (at egress) most of Netfilter's subsystems, ebtables (bridge), arptables or iptables and their equivalent nftables tables: the firewall is bypassed. tcpdump (or nmap) is able to read incoming packets because it captures them before the firewall, nmap is able to send ARP packets because it injects them after the firewall.
So with a standard setup, any packet generated by nmap or any other tool using AF_PACKET can't be filtered with arptables (or iptables).
There are ways to overcome this.
Old method: using a bridge and ebtables
It's bridging, thus in most cases not compatible with Wifi.
For ebtables (or nftables using bridge family) it's usually not a problem: when the ARP or IP packet that couldn't be filtered is turned into an Ethernet frame it re-enters the network stack at an other layer. Now it's within the network stack and will be affected by all the facilities there, including bridge firewall rules created with ebtables (or nftables with a bridge family). Using a bridge thus allows to overcome the firewall bypass.
Create a bridge, set eth0 as bridge port and move the addresses and routes on br0 (of course this should be done by reconfiguring the adequate network tool in use and/or not done remotely because of the temporary loss of connectivity):
ip link add name br0 up type bridge
ip link set dev eth0 master br0
ip addr flush dev eth0
ip addr add 192.168.1.38/24 brd + dev br0
ip route add default via 192.168.1.1 # not needed for this problemThen transform the arptables rules into ebtables rules. They'll still use INPUT and OUTPUT because these are the chains between the routing stack and (for lack of better term) the bridge stack.
ebtables -A OUTPUT -p ARP --arp-ip-dst 192.168.1.30 -j DROP
ebtables -A INPUT -p ARP --arp-ip-src 192.168.1.30 -j DROPOne roughly equivalent nftables ruleset could be (to load with nft -f somerulefile.nft):
add table bridge t # for idempotence
delete table bridge t # for idempotencetable bridge t {
chain out {
type filter hook output priority 0; policy accept;
arp daddr ip 192.168.1.30 drop
} chain in {
type filter hook input priority 0; policy accept;
arp saddr ip 192.168.1.30 drop
}
}(Additional filtering to limit the affected interface should probably be added.)
With one of these rules in place, running concurrently two tcpdump, one on br0 one on eth0 like this:
tcpdump -l -n -e -s0 -i br0 arp &
tcpdump -l -n -e -s0 -i eth0 arp &will show emission on br0 but not on eth0 anymore: the ARP packet which couldn't be blocked when injected is effectively blocked by the bridge layer. If the rules are removed, both interfaces will show traffic. Likewise for the reverse test from remote: the packet will be captured on eth0 but won't reach br0: blocked.
New method: nftables with netdev family and egress hook
⚠: requires Linux kernel >= 5.16 which provides the egress hook, and nftables >= 1.0.1 to use it. ingress has been available since kernel 4.2.
As there's no bridge involved, there's no change of network layout needed, and this will work the same for Ethernet or for Wifi.
In particular this commit presents use cases:netfilter: Introduce egress hook
Support classifying packets with netfilter on egress to satisfy user
requirements such as:outbound security policies for containers (Laura)
filtering and mangling intra-node Direct Server Return (DSR) traffic on a load balancer (Laura)
filtering locally generated traffic coming in through AF_PACKET, such as local ARP traffic generated for clustering purposes or DHCP
(Laura; the AF_PACKET plumbing is contained in a follow-up commit)[...]nftables provides access to additional Netfilter hooks (not described in previous schematic) in the netdev family working at the interface level: ingress and egress. These hooks are near AF_PACKET for ingress and egress (staying fuzzy because implementations details with regard to ingress/egress and capture/injection have some subtleties): egress is able to affect packets injected at AF_PACKET.
Base chains in netdev family tables must be linked to (an) interface(s). Using OP's initial setup, the previous nftables ruleset can be rewritten using the netdev family's syntax like this:
add table netdev t # for idempotence
delete table netdev t # for idempotencetable netdev t {
chain out {
type filter hook egress device eth0 priority 0; policy accept;
arp daddr ip 192.168.1.30 drop
} chain in {
type filter hook ingress device eth0 priority 0; policy accept;
arp saddr ip 192.168.1.30 drop
}
}tcpdump won't capture any injected ARP at egress: they were dropped before. Capture at ingress still happens first: tools relying on AF_PACKET (starting with tcpdump) can still capture them (but the firewall will drop them right after).Misc: There's also tc which is able to filter AF_PACKET sockets (if the tool doesn't use the option PACKET_QDISC_BYPASS). tc is more difficult to handle. Here's a Q/A with an answer of mine (at the time I wrote it my understanding and overall explanation was less acurate) having a simple example without filter.
|
I'm trying to implement a way to prevent network scans from my notebook. One of the things I want is to allow arp request to specific hosts, like my gateway.
I added some rules using arptables and they seem to work (at first)
arptables -A OUTPUT -d 192.168.1.30 -j DROP
arptables -A INPUT -s 192.168.1.30 -j DROPThis is actually blocking arp requests to this host. If I run:
tcpdump -n port not 22 and host 192.168.1.38 (target host)and run:
arp -d 192.168.1.30; ping -c 1 192.168.1.30; arp -n (notebook)tcpdump shows no incoming packets on the target and arp -n on the notebook show (incomplete)
But if I run nmap -sS 192.168.1.30 on my notebook I get on the target host:
22:21:12.548519 ARP, Request who-has 192.168.1.30 tell 192.168.1.38, length 46
22:21:12.548655 ARP, Reply 192.168.1.30 is-at xx:xx:xx:xx:xx:xx, length 28
22:21:12.728499 ARP, Request who-has 192.168.1.30 tell 192.168.1.38, length 46
22:21:12.728538 ARP, Reply 192.168.1.30 is-at xx:xx:xx:xx:xx:xx, length 28but an arp -n on the notebook still shows incomplete, but the nmap detects the host.
I also tried using nftables and ebtables with no success.
How can I prevent nmap to send arp request and finding the host?
| arptables not working with nmap |
The required information to reply correctly: "From which interface did the initial packet come from?" is lost when a different packet is seen in reply coming from a network interface. A way to memorize it and reuse it in replies is needed. Here it is: conntrack which memorizes a list of all currently tracked connections. It can also store, per flow a mark which is then called connmark. One can give a meaning to a value.
This allows to apply policy routing to a whole flow rather than just to an individual packet.
The blog To Linux and beyond ! explains it there: Netfilter Connmark.
The idea is to memorize the information when a flow is using the WireGuard interface so that reply packets are associated to the same flow, and routed back through the WireGuard interface instead of the usual route they would otherwise have taken. This can be handled by an independent nftables table and associated routing tables/rules to alter the normal fate of the packet. Using arbitrary mark value 0xf00 with the meaning "this flow came from wg0" and arbitrary routing table 1000 to select wg0 for replies.
Prepare the adequate routing table and routing rules to override the normal routes for reply packets:
ip route add default dev wg0 table 1000
ip rule add fwmark 0xf00 lookup 1000

If IPv6 is also used (but OP's configuration lacks IPv6 in AllowedIPs) then also:
ip -6 route add default dev wg0 table 1000
ip -6 rule add fwmark 0xf00 lookup 1000

Example using systemd-networkd:
[Match]
Name=wg0

[Network]
Address=10.0.0.2/24
Address=fdc9:281f:04d7:9ee9::2/64

[Route]
Gateway=0.0.0.0
Table=1000

[RoutingPolicyRule]
Table=1000
FirewallMark=0xf00

And add the table below:
replywg0.nft:
table inet replywg0        # for idempotency
delete table inet replywg0 # for idempotency

table inet replywg0 {
chain prerouting {
type filter hook prerouting priority -150; policy accept;
iif wg0 ct mark set 0xf00
iif != wg0 ct mark 0xf00 meta mark set 0xf00
}

chain output {
type route hook output priority -150; policy accept;
ct mark 0xf00 meta mark set 0xf00
}
}

Load with:
nft -f replywg0.nft

Notes and caveats:

- The packet mark, the only one which affects routing, is not set when coming from wg0; only the connmark is set: else it would select the single route defined in table 1000 and would route the packet back where it came from instead of proceeding to the local system or to Docker containers. If routing table 1000 included all the (unknown to me) relevant additional routes, there would be no need for this special handling.
- The output chain is completely optional if that's only for Docker traffic. It would be useful only for local reply traffic: traffic received from wg0 that didn't go to Docker but to the local system. Note the use of type route instead of type filter here, so a new route lookup can still happen. Anyway, some UDP services won't reply properly here: the IP source address would be the wrong one (the one set on enp41s0) and some additional imperfect NAT band-aid would be needed (for example masquerade in table inet nat output).
- Be careful if other needs for marks are put in place later (even WireGuard itself has an option that can set marks): they will likely clash if not handled together properly.
- FTP (which appears in OP's settings) is special: it uses additional connections for data and can require an ALG (usually in passive mode for the server case). As long as it's not encrypted, on Linux it can be handled with the kernel modules nf_conntrack_ftp + nf_nat_ftp and adequate settings, which are described for example in: Secure use of iptables and connection tracking helpers; these have (subtly different) equivalent nftables settings. As RELATED traffic (the data traffic) inherits the connmark, the settings described in that blog would even work alongside the settings in this answer. The toggle nf_conntrack_helper is being completely removed from kernels >= 6.0, so this becomes a required setting.
- I'm assuming that rp_filter does not have the value 1, because else probably nothing would have worked through wg0 in the first place. If it was set to 1, additional settings would be required in several places.
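If in doubt whether flows actually pick up the connmark, they can be inspected live with conntrack-tools (the mark filter is a standard option):

conntrack -L --mark 0xf00    # list only the connections carrying the 0xf00 connmark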
I use WireGuard as a secure communication channel between two servers in different DCs to hide the existence of the end server (server B).
I use nftables as a firewall management tool.
From public server A, the traffic is forwarded keeping the IP address (necessary for the application).
The destination server B receives the packets and the application processes them, but eventually, the server tries to return the packet to the original IP (the sender IP of the original packet) and this becomes a problem.
Masquerading the original IP looks like a simple solution, but it is necessary to keep the original IP already on server B to route these packets back to the WireGuard tunnel.
The tcpdump of the server B:
1:02:36.675958 wg0 In IP 1.2.3.4.54617 > 10.0.0.2.21: Flags [S], seq 1265491449, win 64240, options [mss 1452,nop,wscale 8,nop,nop,sackOK], length 0
1:02:36.675980 docker0 Out IP 1.2.3.4.54617 > 172.16.0.2.21: Flags [S], seq 1265491449, win 64240, options [mss 1452,nop,wscale 8,nop,nop,sackOK], length 0
1:02:36.676030 docker0 In IP 172.16.0.2.21 > 1.2.3.4.54617: Flags [S.], seq 1815055360, ack 1265491450, win 64240, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
1:02:36.676033 enp41s0 Out IP 10.0.0.2.21 > 1.2.3.4.54617: Flags [S.], seq 1815055360, ack 1265491450, win 64240, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0

1.2.3.4 - original IP
10.0.0.2 - IP wireguard on B server
172.16.0.2 - docker network, let's pretend this is an application (everything works there correctly)

Unfortunately, I have not come up with a solution to this problem, so I am asking for your help. Is it still possible? If yes, by what means?
Update
I decided to use HAProxy, but I don't think that's a very high-performance solution.
So I'm still in need of possible solutions to this problem.
Use systemd-networkd to configure the WireGuard tunnel:
# sudo cat /etc/systemd/network/99-wg0.netdev

[NetDev]
Name=wg0
Kind=wireguard
Description=WireGuard tunnel wg0

[WireGuard]
ListenPort=51820
PrivateKey=[key]

[WireGuardPeer]
PublicKey=[key]
PresharedKey=[key]
AllowedIPs=0.0.0.0/0
Endpoint=[server A]:51820

# sudo cat /etc/systemd/network/99-wg0.network

[Match]
Name=wg0

[Network]
Address=10.0.0.2/24
Address=fdc9:281f:04d7:9ee9::2/64

# sudo ip rule:
0: from all lookup local
32766: from all lookup main
32767: from all lookup default

nftables configuration:
# sudo nft list ruleset
table inet filter {
chain allow {
ct state invalid drop comment "early drop of invalid connections"
ct state { established, related } accept comment "allow tracked connections" ip protocol icmp accept comment "allow icmp"
meta l4proto ipv6-icmp accept comment "allow icmp v6" icmp type echo-request limit rate over 10/second burst 4 packets drop comment "No ping floods"
icmpv6 type echo-request limit rate over 10/second burst 4 packets drop comment "No ping floods"
} chain wireguard {
tcp dport 21 accept
} chain input {
type filter hook input priority filter; policy drop; iif "lo" accept comment "allow from loopback" tcp dport 22 ct state new limit rate 15/minute accept comment "Avoid brute force on SSH"
tcp dport 22 accept comment "allow sshd" ip6 saddr [server A] udp dport 51820 accept comment "Accept wireguard connection from proxy1.vps-da4c9ada.ovh.zolotomc.ru" jump allow comment "allowed traffic for input" meta pkttype host limit rate 5/second counter packets 1047 bytes 43403 reject with icmpx admin-prohibited
reject with icmpx host-unreachable
} chain forward {
type filter hook forward priority filter; policy drop; iif "docker0" accept comment "allow outgoing traffic from docker" jump allow comment "allowed traffic for forward" iif "wg0" jump wireguard comment "Wireguard chain" reject with icmpx host-unreachable
} chain output {
type filter hook output priority filter; policy accept;
}
}table inet nat {
chain prerouting {
type nat hook prerouting priority dstnat; policy accept;
iif "wg0" jump wireguard comment "Wireguard chain"
} chain wireguard {
tcp dport 21 dnat ip to 172.16.0.2
tcp dport 21 dnat ip6 to fe80::a8d7:f6ff:fe0b:4774
} chain input {
type nat hook input priority 100; policy accept;
} chain output {
type nat hook output priority -100; policy accept;
} chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
iif "docker0" oif != "docker0" masquerade
}
}I tried using Setting packet metainformation, but it reassigns the port to a new random port.
And yet, I'm starting to think that it's impossible or too complicated based on just nftables rules.
| Forwarding traffic back through WireGuard (Setting dnat for oif wg0, after processing the application) |
The nat hook (as all other hooks) is provided by Netfilter to nftables. The NAT hook is special: only the first packet of a connection is traversing this hook. All other packets of a connection already tracked by conntrack aren't traversing any NAT hook anymore but are then directly handled by conntrack to continue performing the configured NAT operations for this flow.
That explains why you should never use this hook to drop: it won't affect already tracked connections, NAT-ed or not.
Just change the hook type from type nat to type filter for the part dropping traffic. Contrary to iptables a table is not limited to one hook type and actually has to use multiple types for this kind of case, because the set is local to a table and can't be shared across two tables. For the same reason, this table should logically not be called inet nat anymore because it's not just doing NAT (but I didn't rename it).
So in the end:
nftables.conf:
table inet nat { set blocked {
type ipv4_addr
} chain block {
type filter hook postrouting priority 0; policy accept;
ip daddr @blocked counter drop
} chain postrouting {
type nat hook postrouting priority 100; policy accept;
oifname "ppp0" masquerade
iifname "br-3e4d90a574de" masquerade
}
}

Now:

- all packets will be checked by the inet nat block chain, allowing the blocked set to immediately affect the traffic rather than having to wait for the next flow to be affected.
- as usual, only the first packet of a new flow (tentative conntrack state NEW) will traverse the inet nat postrouting chain.

Please also note that iifname "br-3e4d90a574de" masquerade requires a recent enough kernel (Linux kernel >= 5.5): before, only filtering by outgoing interface was supported in a postrouting hook. Also, this looks like a Docker-related interface, and adding this kind of rule might possibly interact with Docker (eg: it might do NAT on traffic between two containers in the same network) because it's referencing a bridge interface. That's because Docker makes bridged traffic seen by nftables (as well as iptables) by loading the br_netfilter module.
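If in doubt, the immediate effect can be verified by watching the counter on the drop rule while an element is added (192.0.2.10 is just an example address):

nft add element inet nat blocked { 192.0.2.10 }
nft list chain inet nat block    # the counter should start increasing for traffic to 192.0.2.10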
|
I have the following in nftables.conf:
table inet nat { set blocked {
type ipv4_addr
}
chain postrouting {
type nat hook postrouting priority 100; policy accept; ip daddr @blocked counter drop;
oifname "ppp0" masquerade;
iifname "br-3e4d90a574de" masquerade;
}
}

The set blocked is a named set which can be updated dynamically. It is in this set I wish to have a collection of IPs to block, updated every n minutes. In order to preserve the atomicity, I am not using the following (updateblock.sh) to update the list:
#!/bin/bash

sudo nft flush set inet nat blocked
sudo nft add element inet nat blocked {$nodes}

But rather blockediplist.ruleset:
#!/usr/sbin/nft -f

flush set inet nat blocked
add element inet nat blocked { <example_ip> }

I use the following order of commands:
nft -f /etc/nftables.conf
nft -f blockediplist.ruleset

However, the changes in blockediplist.ruleset are not immediately applied. I know the ruleset now contains the new IPs because the IPs are present in nft list ruleset and nft list set inet nat blocked. Even with just nft add element inet nat blocked { <IP> }, the IP is not instantly blocked.
An alternative method would be to define a new set and reload nftables.conf in its entirety, though I think this would be a poor and inefficient way of doing things.
Is there a way to force the changes in blockediplist.ruleset to be applied immediately?
UPDATE: I've just discovered that when I block an IP which I haven't pinged, it gets blocked instantly. However when adding an IP to the blocklist mid-ping it takes a while for it to be blocked. When I try a set with netdev ingress the IP gets blocked instantly. Maybe this avenue of investigation might reveal something.
| nftables Named Set Update Delay |
Explanation

The rule, set through iptables-nft, is using xtables kernel modules (here: xt_tcpmss and xt_TCPMSS) through the nftables kernel API along a compatibility layer API, even if xtables was initially intended for (legacy kernel API) iptables.
Native nftables cannot use xtables kernel modules by design: whenever xtables is in use, it's not native anymore, and the userland nft command (or its API) deals only with native nftables. Use of xtables is reserved for the compatibility layer. So when displayed through nft any such unknown module is displayed commented out (but see later).the current version of iptables-nft wasn't yet able to translate automatically the iptables rule into native nftables rule (for this case) or there is no equivalent native nftables rule to translate to (eg: ponder the LED target).
Here nft sees there are xtables modules that aren't translatable by the common translation engine, so it considers this part as off limits and adds a comment on the untranslatable part, but still translates what it knows about. The call to xtables modules can be seen with --debug=netlink:
# nft -a --debug=netlink list ruleset
ip mangle FORWARD 2
[ meta load oifname => reg 1 ]
[ cmp eq reg 1 0x30707070 0x00000000 ]
[ meta load l4proto => reg 1 ]
[ cmp eq reg 1 0x00000006 ]
[ match name tcp rev 0 ]
[ match name tcpmss rev 0 ]
[ counter pkts 0 bytes 0 ]
[ target name TCPMSS rev 0 ]table ip mangle { # handle 167
chain FORWARD { # handle 1
type filter hook forward priority mangle; policy accept;
oifname "ppp0" meta l4proto tcp tcp flags & (syn|rst) == syn # tcpmss match 1400:65495 counter packets 0 bytes 0 tcp option maxseg size set rt mtu # handle 2
}
}above: match and target mean xtables modules. Since nftables uses about the same engine as iptables-translate in this regard, one could guess -m tcpmss --mss 1400:65495 got an issue because that's the one which starts with a comment and isn't translated in the output, while the last part was translated. Whatever nft displays back here is only for the display and mustn't be taken as the actual rules.
The actual rules are the bytecodes shown with --debug=netlink (plus non-visible parts for xtables specific data), so this bytecode is the proof that this rule is doing something. It's just not useful to native nftables.

Native nftables version
For example most of OP's iptables rule can be natively translated:
# iptables-translate -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
nft add rule ip mangle FORWARD tcp flags & (syn|rst) == syn counter tcp option maxseg size set rt mtuWhile there's no automatic translation available currently for -m tcpmss --mss , the feature is available: tcp option maxseg size which can be used either as an expression (the equivalent of the match -m tcpmss --mss) or with set as a statement (the equivalent of the target -j TCPMSS). Below would be the result of such translation (and might be in the future once the translation engine is improved):
nft add rule ip mangle FORWARD 'oifname ppp0 tcp flags & (syn|rst) == syn tcp option maxseg size 1400-65495 counter tcp option maxseg size set rt mtu'

The second value 65495 is probably useless (one could just use tcp option maxseg size >= 1400 instead).

Note
nft and iptables-nft can be misleading when trying to abuse translations. For example using iptables v1.8.7 (nf_tables) and nft v1.0.0, one can get this:
# iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
# nft list ruleset | tee /tmp/mss.nft
table ip mangle {
chain FORWARD {
type filter hook forward priority mangle; policy accept;
meta l4proto tcp tcp flags & (syn|rst) == syn counter packets 0 bytes 0 tcp option maxseg size set rt mtu
}
}
# nft flush ruleset
# nft -f /tmp/mss.nft
# iptables-save
# Table `mangle' is incompatible, use 'nft' tool.That's because while nft was able to translate the whole ruleset when displaying it, the initial compat ruleset was still including an xtables target in the bytecode (the same as seen above: [ target name TCPMSS rev 0 ]): iptables-nft didn't behave as iptables-translate and nft's output hid the difference. Once run again through the nft command, the result is full native nftables with tcp option maxseg size set rt mtu instead of -j TCMPSS --clamp-mss-to-pmtu . But the iptables-save command doesn't recognize the result anymore as iptables-translated code and can't translate it back into iptables format.
Don't dump blindly iptables rules using nftables to reload them with nftables, this can bite later if iptables is still needed.
|
My pppoe client automatically adds an iptables rule iptables -t mangle -o "$PPP_IFACE" --insert FORWARD 1 -p tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1400:65495 -j TCPMSS --clamp-mss-to-pmtu from /etc/ppp/ip-up.d. However, this rule in nftables looks like
table ip mangle {
chain FORWARD {
type filter hook forward priority mangle; policy accept;
oifname "ppp0" meta l4proto tcp tcp flags & (syn|rst) == syn # tcpmss match 1400:65495 counter packets 714 bytes 42388 tcp option maxseg size set rt mtu
}
}Why contents after tcpmss is commented and this rule seems to do nothing?
| Why MSS clamping in iptables(-nft) seems to take no effect in nftables? |
By itself netfilter-persistent does nothing, it's a plugin framework only. It does not read /etc/nftables.conf in particular. You should have nftables plugins installed under /usr/share/netfilter-persistent/plugins.d for it to work with nftables. However, I don't think such plugins are available in any package; you'll have to write them yourself (in the language you prefer).
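For reference, a rough sketch of what such a plugin could look like, written as a shell script (the file name is arbitrary and the handled verbs mirror what the stock iptables plugins accept; check those on your system before relying on this):

#!/bin/sh
# /usr/share/netfilter-persistent/plugins.d/15-nftables   (hypothetical name)
set -e

RULES=/etc/nftables.conf

case "$1" in
    start|restart|reload)
        nft -f "$RULES"        # load the saved ruleset atomically
        ;;
    save)
        nft list ruleset > "$RULES"
        ;;
    flush)
        nft flush ruleset
        ;;
    stop)
        # the stock plugins keep rules in place on stop unless configured otherwise
        ;;
    *)
        echo "Usage: $0 {start|restart|reload|flush|save|stop}" >&2
        exit 1
        ;;
esac

Don't forget to make the plugin executable (chmod +x).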
|
I tried this:
iptables -F
ip6tables -F

sudo nft list ruleset > /etc/nftables.conf

sudo service netfilter-persistent save
nft list ruleset I see nothing. I think that netfilter-persistent doesn't see my file with rules. I use Debian Buster.
| Nftables rules disappeared after reboot! |
This was a syntax limitation of nftables 0.7 (or a few other versions): it didn't consider ICMP and ICMPv6 directly usable in the dual IPv4/IPv6 table inet without stating explicitly which IP protocol first:
So the rule:
icmp type echo-request ct state new acceptto work both on IPv4 and IPv6 has to be written twice like this:
UPDATE: actually one shouldn't rely for IPv6 on nexthdr pointing to the upper-layer protocol: there can be Extension Headers between the Fixed Header and the upper-layer header (which comes last). Adding the correct syntax (using the meta-informations already providing protocol informations), and leaving my original answer striked, because I don't know if the "correct" syntax is valid with nftables 0.7:
meta nfproto ipv4 meta l4proto icmp icmp type echo-request ct state new accept
meta nfproto ipv6 meta l4proto icmpv6 icmpv6 type echo-request ct state new acceptip protocol icmp icmp type echo-request ct state new accept
ip6 nexthdr icmpv6 icmpv6 type echo-request ct state new acceptgiving the corresponding bytecode (displayed using nft --debug=netlink list ruleset -a):
inet filter input 9 8
[ meta load nfproto => reg 1 ]
[ cmp eq reg 1 0x00000002 ]
[ payload load 1b @ network header + 9 => reg 1 ]
[ cmp eq reg 1 0x00000001 ]
[ payload load 1b @ transport header + 0 => reg 1 ]
[ cmp eq reg 1 0x00000008 ]
[ ct load state => reg 1 ]
[ bitwise reg 1 = (reg=1 & 0x00000008 ) ^ 0x00000000 ]
[ cmp neq reg 1 0x00000000 ]
[ immediate reg 0 accept ]inet filter input 10 9
[ meta load nfproto => reg 1 ]
[ cmp eq reg 1 0x0000000a ]
[ payload load 1b @ network header + 6 => reg 1 ]
[ cmp eq reg 1 0x0000003a ]
[ payload load 1b @ transport header + 0 => reg 1 ]
[ cmp eq reg 1 0x00000080 ]
[ ct load state => reg 1 ]
[ bitwise reg 1 = (reg=1 & 0x00000008 ) ^ 0x00000000 ]
[ cmp neq reg 1 0x00000000 ]
[ immediate reg 0 accept ]ICMP is IP protocol 1, echo-request value 8.
ICMPv6 is IPv6 protocol 58 (0x3a), its echo-request value 128 (0x80).
Newer nftables 0.9 accepts directly the rule icmp type echo-request ct state new accept, but its corresponding bytecode is then only:
inet filter input 9 8
[ meta load nfproto => reg 1 ]
[ cmp eq reg 1 0x00000002 ]
[ meta load l4proto => reg 1 ]
[ cmp eq reg 1 0x00000001 ]
[ payload load 1b @ transport header + 0 => reg 1 ]
[ cmp eq reg 1 0x00000008 ]
[ ct load state => reg 1 ]
[ bitwise reg 1 = (reg=1 & 0x00000008 ) ^ 0x00000000 ]
[ cmp neq reg 1 0x00000000 ]
[ immediate reg 0 accept ]meaning it's dealing only with ICMP, not also ICMPv6, which should still be added with an additional rule, simply as:
icmpv6 type echo-request ct state new acceptgiving back the equivalent bytecode of former version:
inet filter input 10 9
[ meta load nfproto => reg 1 ]
[ cmp eq reg 1 0x0000000a ]
[ meta load l4proto => reg 1 ]
[ cmp eq reg 1 0x0000003a ]
[ payload load 1b @ transport header + 0 => reg 1 ]
[ cmp eq reg 1 0x00000080 ]
[ ct load state => reg 1 ]
[ bitwise reg 1 = (reg=1 & 0x00000008 ) ^ 0x00000000 ]
[ cmp neq reg 1 0x00000000 ]
[ immediate reg 0 accept ] |
I am trying to build a simple stateful firewall with nftables following the Arch Linux nftables guide. I posted this question on the Arch Linux forum and never received an answer.
After completing the guide and rebooting my machine, systemd failed to load the nftables.service. To troubleshoot the error I ran:
systemctl status nftablesHere is the relevant output:
/etc/nftables.conf:7:17-25: Error: conflicting protocols specified: inet-service v. icmpThe error is complaining about a rule that I set for accepting new pings (icmp) in the input chain. Here is the rule and I don’t see anything wrong with it:
icmp type echo-request ct state new acceptIf I remove the rule it will work. But I want the rule.
Here is my ruleset in nftables.conf after completing the guide:
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
ct state established,related accept
iif "lo" accept
ct state invalid drop
icmp type echo-request ct state new accept
ip protocol udp ct state new jump UDP
tcp flags & (fin | syn | rst | ack) == syn ct state new jump TCP
ip protocol udp reject
ip protocol tcp reject with tcp reset
meta nfproto ipv4 counter packets 0 bytes 0 reject with icmp type prot-unreachable
} chain forward {
type filter hook forward priority 0; policy drop;
} chain output {
type filter hook output priority 0; policy accept;
} chain TCP {
tcp dport http accept
tcp dport https accept
tcp dport ssh accept
tcp dport domain accept
} chain UDP {
tcp dport domain accept
}
}What am I missing? Thank you in advance.
| Nftables configuration error: conflicting protocols specified: inet-service v. icmp |
awk can build up a string to execute that can be held below a
particular length; this will doubtless be more efficient than an
xargs that must fork an echo ... | tr pipeline.
#!/usr/bin/env awk
{
len = length()
if (len > limit) {
if (length(outbuf)) {
system("echo " ENVIRON["TARGET_SET"] outbuf)
}
outbuf = $0
# the limit will be 4096 minus the length of the rest of
# the command and if the 4096 is kern.argmax then you
# may need to subtract even more to account for the
# length of the environment variables!
limit = 6
} else {
# join another field to output buffer
outbuf = outbuf "," $0
}
# the plus is for the length of the field joiner
limit -= len + 1
}
END {
if (length(outbuf)) {
system("echo " ENVIRON["TARGET_SET"] outbuf)
}
}The "echo " in this script are for testing, one would presumably
instead run the suitable nft ... command. TARGET_SET must be
exported so that awk can read it from the environment.
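For testing, it could be driven like this (the table/set name is only an example, and the awk file name is whatever the script above was saved as):

export TARGET_SET='inet filter blocked '
awk -f chunked-elements.awk "$CUSTOM_CIDRS_FILE"

Each echoed line is one chunk of comma-joined CIDRs short enough to be wrapped into a single nft add element inet filter blocked { ... } call.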
|
So I have several files which contain CIDR entries (such as 1.1.1.0/24). The task is to add entries from these files to one NFTables set using a bash script. In doing so, I am limited to OpenWRT utilities.
The catch is that there can be many entries in these files and they can exhaust the limit of 4096 characters per command. And also these files are automatically updated by cron, so a set needs to be periodically erased and re-filled as well.
It seems to me that there is an easier way to do this than I have already done it. I also want to reduce the execution time of this mess. Here is my attempt to do this.
nft add element $TARGET_SET { $(awk '{print $1 ", "}' "$CUSTOM_CIDRS_FILE") }

Here's another question, if my file has a very large number of entries, will I overcome this limit of 4096 characters per command? And one last question, will it take a very long time to form a set if I add entries one at a time in a loop?
I'm waiting primarily for answers with good practice.
| How to add CIDRs from files to set in best way? |
If you're using ansible's template module, then you can configure the variable interpolation markers used by Jinja:

Also, you can override jinja2 settings by adding a special header to template file. i.e. #jinja2:variable_start_string:'[%', variable_end_string:'%]', trim_blocks: False which changes the variable interpolation markers to [% var %] instead of {{ var }}. This is the best way to prevent evaluation of things that look like, but should not be Jinja2.
|
I am trying to template a stateful nftables configuration file with Ansible.

Ansible uses Jinja for templating, and Jinja uses curly braces for variables: {{ variable }}

Nftables configuration uses curly braces for grouping variables together: { 192.168.3.0/24, 192.168.1.0/24 }.

Escaping Jinja2 curly braces looks like this: {%raw%} { {%endraw%} or like this:
{{ '{' }}
This look extremely UGLY and hard to read.
Any way to make NFtables use a different character than curly braces? Like [ or ( or <
| Can I use something else than curly bracket in nftables.conf? |
You can grant specific users the ability to run specific commands with sudo using sudoers files that you place under /etc/sudoers.d/.
The format for this that you could use is:
user host=(who to run as) [Options] Command
Note: You should do all of your editing in visudo.
So if you wanted to give the www-data the privilege to run all nft as root you could create the file /etc/sudoers.d/www-data with the following contents:
www-data ALL = (root) nftSince this is in a script, you probably do not want to be prompted for a password.
In this case you will want to add the NOPASSWD: option:
www-data ALL = (root) NOPASSWD: nftIn the event that you only wanted to allow nft add you could do the following:
www-data ALL = (root) NOPASSWD: nft add*And in the event that you wanted to allow the www-data user to run more than just one command as root you can comma separate the commands:
www-data ALL = (root) NOPASSWD: nft, ls, catThis would allow www-data to sudo nft or ls or cat without a password.
Note: be careful when editing sudoers files. In the event that your syntax is incorrect any sudo command by any user will error out.
You can use visudo -c sudofile to validate the file. So for the example file I've been using visudo -c /etc/sudoers.d/www-data. If that commands outputs www-data: parsed OK then the syntax is correct.
|
I am setting up a captive portal on a Raspberry Pi (latest distro CLI) where my web application will change NFTables rules based on who is logged in. I have a LEMP stack set up; with Laravel 8 as the PHP framework.
Nginx/php user is www-data, this user has a sudoers file setup with www-data ALL=(ALL) NOPASSWD:/var/www/vportal.getvs.net/app/Python/vportal.py
The following ls -al for the python script: -rwxr-xr-x 1 pi www-data 765 Dec 21 11:19 vportal.py
In the Laravel Controller code:
$process = new Process(['python3','/var/www/vportal.getvs.net/app/Python/vportal.py']);
$process->run(); // executes after the command finishes if (!$process->isSuccessful()) {
throw new ProcessFailedException($process);
}
echo $process->getOutput();The Python script:
import subprocess
#import os
#import pwd
#print(pwd.getpwuid(os.getuid()).pw_name)
subprocess.run("sudo nft add table nat", shell=True)
subprocess.run("sudo nft 'add chain nat postrouting { type nat hook postrouting priority 100; }'", shell=True)
subprocess.run("sudo nft add rule ip nat postrouting oifname \"wlan0\" masquerade", shell=True)
subprocess.run("sudo nft add table ip filter", shell=True)
subprocess.run("sudo nft 'add chain ip filter forward { type filter hook forward priority 0; policy accept; }'", shell=True)
subprocess.run("sudo nft add rule ip filter forward iifname \"wlan0\" oifname \"wlx000e3b337325\" ct state related,established accept", shell=True)
subprocess.run("sudo nft add rule ip filter forward iifname \"wlx000e3b337325\" oifname \"wlan0\" accept", shell=True)The idea is to have a route in Laravel 8 trigger the python script through the process command from Symfony on a particular route with the controller. I can run commands that don't require sudo without issues, but my scripts dont want anything to do with sudo. Is there a way to allow www-data to run a specific script "safely" with sudo privileges?
Note: this is only used on a local network and won't touch the internet. Not that it's any less of a risk that way, but I figured I would at least note this.
| Unable to run scripts as sudo with the www-data user in Debian |
Alas, this feature was added with this commit, made available only since nftables 0.9.7. Your ruleset works as-is when tested with nftables 0.9.8.

src: allow to use variables in flowtable and chain devices
This patch adds support for using variables for devices in the chain
and flowtable definitions, eg.
define if_main = lotable netdev filter1 {
chain Main_Ingress1 {
type filter hook ingress device $if_main priority -500; policy accept;
}
Signed-off-by: Pablo Neira Ayuso [emailprotected]

A netdev family chain registers to one or multiple (since kernel 5.5 and nftables 0.9.3) interface(s), which must all exist before the chain definition. A wildcard can't be used.
The multidevice chain syntax is slightly different:
table netdev filter {
chain ingress {
type filter hook ingress devices = { ens33, ens34 } priority -500; # ...
}
}Or with nftables >= 0.9.7:
define extifs = { ens33, ens34 }table netdev filter {
chain ingress {
type filter hook ingress devices = $extifs priority -500; # ...
}
}Having only one interface (eg: { ens33 }) is displayed back with the previous existing syntax.
|
For a netdev filter table's ingress hook I'd like to store the device name in a variable, but I somehow can't figure out the correct syntax.
It works as follows:
table netdev filter {
chain ingress {
type filter hook ingress device ens33 priority -500; # ...
}
}... but I would like to use a variable in place of ens33 on the line:
type filter hook ingress device ens33 priority -500;When I use the following, I get an error:
define extif = ens33table netdev filter {
chain ingress {
type filter hook ingress device $extif priority -500; # ...
}
}The error reads:
Error: syntax error, unexpected '$', expecting string or quoted string or string with a trailing asteriskNow I also tried ens* hoping for it to be similar to ens+ in iptables, but then the error changes to the one I also encounter when giving an invalid device name:
Error: Could not process rule: No such file or directory
chain ingress {
^^^^^^^Similarly quoting didn't work for me. The documentation also didn't provide the clue that could make it work.
How can I place the name (or names) of my external interfaces in a variable in order to use them as parameter for device on the type filter hook ... stanza?The kernel is 5.8 and the system is Ubuntu 20.04. nftables reports as v0.9.3 (Topsy).
| How to use variable for device name when declaring a chain to use the (netdev) ingress hook? |
try this one
table inet filter {
set blackhole_4 {
type ipv4_addr
flags timeout
}
set blackhole_6 {
type ipv6_addr
flags timeout
}
set greed_4 {
type ipv4_addr
flags dynamic
size 128000
}
set greed_6 {
type ipv6_addr
flags dynamic
size 128000
}
chain input {
type filter hook input priority 0;
ct state new tcp flags syn tcp dport 8000 add @greed_4 { ip saddr ct count over 3 } add @blackhole_4 { ip saddr timeout 1m } drop
ct state new tcp flags syn tcp dport 8000 add @greed_6 { ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } drop
}
}EDIT: Explanation from @User1404316: Since @Zip May (correctly) asked for some explanation. As I understand it: ct introduces a connection tracking rule, in this case for new tcp connections, that if they are heading for port 8000 (dport is destination port), add the source IPv4 to the pre-defined collection set greed_4. If that happens, the rule continues with the first bracket condition, that if the source address has more than three active connections, add the source IPv4 to the second predefined set blackhole_4, but only keep it there for one minute, and if we have gotten this far along in the rule, then drop the connection.
The original posted answer had two of its long lines truncated, but I figured out what I think they should be and inserted them above. The good news is that my testing indicates that this answer works!
A remaining curiousity for me is how to decide when to set sizes for the collection sets, and how large to make them, so I just left things the way they are for now.
|
In setting up dynamic blacklists for nftables, per A.B.'s excellent answer, I'm encountering an error when duplicating the blacklist for both ipv4 and ipv6. I perform the following command-line operation (debian nftables) (EDIT: The original question was for a prior version, 0.9.0; during the back-and-forth comment process, it was upgraded to the more current version 0.9.3, so the accepted answer below is valid for the version 0.9.3 API):
nft flush ruleset && nft -f /etc/nftables.conffor a config file including:
tcp flags syn tcp dport 8000 meter flood size 128000 { ip saddr timeout 20s limit rate over 1/second } add @blackhole_4 { ip saddr timeout 1m } drop
tcp flags syn tcp dport 8000 meter flood size 128000 { ip6 saddr timeout 20s limit rate over 1/second } add @blackhole_6 { ip6 saddr timeout 1m } drop
tcp flags syn tcp dport 8000 meter greed size 128000 { ip saddr ct count over 3 } add @blackhole_4 { ip saddr timeout 1m } drop
tcp flags syn tcp dport 8000 meter greed size 128000 { ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } dropand get the following error response:
/etc/nftables.conf:130:17-166: Error: Could not process rule: Device or resource busy
tcp flags syn tcp dport 8000 meter flood size 128000 { ip6 saddr timeout 20s limit rate over 1/second } add @blackhole_6 { ip6 saddr timeout 1m } drop
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/etc/nftables.conf:132:17-145: Error: Could not process rule: Device or resource busy
tcp flags syn tcp dport 8000 meter greed size 128000 { ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } drop
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Also, I'm not sure what the size is a measure of; it's set to 128000 because that's what I saw somewhere authoritative.
EDIT: Okay. I decided to continue playing, and see that creating separate meters for each ipv6 rule causes the error message to go away, but I don't understand why, so instead of answering my own question, I'll leave it open for someone who has a knowledgeable explanation why the meters can't be shared. The following produces no errors:
tcp flags syn tcp dport 8000 meter flood_4 size 128000 { ip saddr timeout 20s limit rate over 1/second } add @blackhole_4 { ip saddr timeout 1m } drop
tcp flags syn tcp dport 8000 meter flood_6 size 128000 { ip6 saddr timeout 20s limit rate over 1/second } add @blackhole_6 { ip6 saddr timeout 1m } drop
tcp flags syn tcp dport 8000 meter greed_4 size 128000 { ip saddr ct count over 3 } add @blackhole_4 { ip saddr timeout 1m } drop
tcp flags syn tcp dport 8000 meter greed_6 size 128000 { ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } dropEDIT: At the time of this writing, the man pages for nftables use the term meter, but according to the nftables wiki, the term has been deprecated in favor of set, which requires a definition including a specific protocol type (eg. ipv4_addr), so if nftables is currently mapping the term meter to the newer set, that would explain why a single meter can't currently be shared between ipv4_addr and ipv6_addr. However, the example given in the nftables wiki itself is also not up-to-date: it generates an error because dynamic is not currently (nftables v0.9.0) a valid flag type. Back to the man pages, and we can see that set has flags of either type constant, interval, or timeout, and I'm uncertain which would be appropriate for this purpose.
EDIT: The "count" form of metering seems to have moved to a separate part of nftables: ct (connection tracking). It seems that one should now create defintions such as:
set greed_4 {
type ipv4_addr
flags constant
size 128000
} set greed_6 {
type ipv6_addr
flags constant
size 128000
} and then the following rules may be close, but still generate errors:
ct state new add @greed_4 { tcp flags syn tcp dport 8000 ip saddr ct count over 3 } add @blackhole_4 { ip saddr timeout 1m } drop
ct state new add @greed_6 { tcp flags syn tcp dport 8000 ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } drop | nftables dynamic blacklisting both IPv4 and IPv6 |
That's the compatibility table and chains created by the newer version of the ebtables command, used to manipulate bridges, but using the nftables kernel API in ebtables compatibility mode. Something ran an ebtables command somewhere, even if just to verify there's no ebtables rule present, or maybe to auto-load some ebtables ruleset, which was converted into an nftables ruleset.
You can know that's it by a few methods (here on CentOS8):actual executable
# readlink -e /usr/sbin/ebtables
/usr/sbin/xtables-nft-multiversion displayed
# ebtables -V
ebtables 1.8.2 (nf_tables)rule monitoring
term1:
# nft -f /etc/sysconfig/nftables.conf
# nft monitor #command waits in event modeterm2:
# ebtables -L
Bridge table: filterBridge chain: INPUT, entries: 0, policy: ACCEPTBridge chain: FORWARD, entries: 0, policy: ACCEPTBridge chain: OUTPUT, entries: 0, policy: ACCEPTterm 1 again (Fedora's newer nftables version would display a bridge's -200 priority value with its symbolic equivalent filter):
add table bridge filter
add chain bridge filter INPUT { type filter hook input priority -200; policy accept; }
add chain bridge filter FORWARD { type filter hook forward priority -200; policy accept; }
add chain bridge filter OUTPUT { type filter hook output priority -200; policy accept; }
# new generation 7 by process 16326 (ebtables)As the base chains include no rules and have an accept policy, nothing will be affected. The system also requires the presence of a bridge to have this table and chains used at all anyway.
If CentOS8 and your current Fedora version are still close enough, this might be created by the use of the systemd ebtables service from the iptables-ebtables package. If you don't need bridge filtering, you can consider removing this package. You can still use nft for it if really needed.
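Assuming the same origin as in the CentOS 8 case described above (the ebtables service shipped by the iptables-ebtables package), that would be, for example; on Fedora the exact package/service names may differ:

systemctl disable --now ebtables.service    # stop only the compat service
dnf remove iptables-ebtables                # or remove the compat package entirely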
The fact that the added table is of family bridge tells it's ebtables rather than iptables, ip6tables or arptables which would all give the same behaviour, creating if not already present a different table family (resp. ip, ip6 or arp) and its base chains. So one should avoid using the same table names to avoid any clash, or at least not the same table+chain combination (eg: an nft rule in the ip filter INPUT (uppercase) chain could clash with iptables etc.)
More information about this here:
Moving from iptables to nftables - nftables wiki
Legacy xtables tools - nftables wiki
Using iptables-nft: a hybrid Linux firewall - Red Hat

About the additional question:
Your rules appear to allow basic client usage (including SSH access from LAN), one important exception notwithstanding:
udp sport 53 acceptwill allow to access any UDP port of your system, as long as the "scan" is made from UDP source port 53.
Replace it with this more sensible rule:
iif lo acceptto allow local communication unhindered (including a possible local DNS server).
|
This is my /etc/sysconfig/nftables.conf
#!/usr/sbin/nft -f
flush ruleset
table ip filter {
chain input {
type filter hook input priority filter; policy accept;
ct state established,related counter packets 264 bytes 17996 accept
ct state invalid drop
tcp dport 22 ip saddr 192.168.0.0/16 accept
udp sport 53 accept
drop
} chain forward {
type filter hook forward priority filter; policy accept;
} chain output {
type filter hook output priority filter; policy accept;
}
}When I use nft -f /etc/sysconfig/nftables.conf but after I reboot, I also get these additional rules below the table I showed above:
table bridge filter {
chain INPUT {
type filter hook input priority filter; policy accept;
} chain FORWARD {
type filter hook forward priority filter; policy accept;
} chain OUTPUT {
type filter hook output priority filter; policy accept;
}
}What is it that I do not understand?
Additional question. I'm trying to harden a machine. The machine should be used for essentially browsing the web, so that has to be allowed. And I want to be able to ssh to it from the local network. Have I missed something essential?
| nftables changes on reboot |
You have two problems:using a too old version of nftables.
I could reproduce the error Error: syntax error, unexpected saddr, expecting comma or '}' using nftables version 0.7 (as found in Debian 9). Meters (nftables wiki) suggests nftables >= 0.8.1 and kernel >= 4.3.
Upgrade nftables. Eg, on Debian 9, using the stretch-backports (stretch-backports, not buster-backports) version 0.9.0-1~bpo9+1, sorry you'll have to search how to do that on other distributions.
using the wrong table, as told by the command (when using nftables 0.9.2):
# nft add rule ip filter input tcp dport @rate_limit meter syn4-meter \{ ip saddr . tcp dport timeout 5m limit rate 20/minute \} counter accept
Error: No such file or directory; did you mean set ‘rate_limit’ in table inet ‘filter’?Indeed, many objects are local to the table where they are declared. So you can't declare it in the inet filter "namespace" and use it in the ip filter "namespace". That's a difference with for example iptables + ipset, where the same ipset set can be used in any table.
This will work (once you get a recent enough nftables):
nft add rule inet filter input tcp dport @rate_limit meter syn4-meter \{ ip saddr . tcp dport timeout 5m limit rate 20/minute \} counter acceptOr alternatively you can move back the meter definition to the ip filter table. |
I have below nftable rule to add a connection rate meter:
nft add rule ip filter input tcp dport @rate_limit meter syn4-meter \{ ip saddr . tcp dport timeout 5m limit rate 20/minute \} counter acceptIt generates the error:
Error: syntax error, unexpected saddr, expecting comma or '}'
add rule ip filter input tcp dport @rate_limit ct state new meter syn4-meter { ip saddr . tcp dport timeout 5m limit rate 20/minute } counter accept
^^^^^nftables ruleset
table ip filter {
chain input {
type filter hook input priority 0; policy accept;
}
}
table inet filter {
set rate_limit {
type inet_service
size 50
} chain input {
type filter hook input priority 0; policy accept;
}
}Initially I tried just inet but due to the error I added ip to see if it make any difference to no success. Any pointers?
| nftables meter error: syntax error, unexpected saddr, expecting comma or '}' |
Let's look at a part of the Packet flow in Netfilter and General Networking schematic. It was made for iptables but most of it applies for nftables:It's documented that the nat table is consulted only for packets in conntrack state NEW: packets starting a new flow.
Routed/forwarded traffic arrives from the nat/prerouting hook: that's where new flows will have a chance to be NAT-ed. OP handled this case.
Locally initiated packets (created at the local process bubble in the center) first traverse the nat/output hook, then their answer will come back as usual through the nat/prerouting hook. Leaving aside the fact that the destination is already not changed for the query, as the answer matches the flow that was created before, it's not a packet in NEW state anymore: the nat/prerouting hook will never be consulted for such traffic because it's too late: the only place to do NAT was nat/output.
So for this case where both routed and locally initiated packets should receive the same alteration, rules in nat/prerouting have to be duplicated in nat/output and usually slightly adapted to match the different case.
The adaptation here is about the host reaching itself, so for the routing case where the interface is the loopback (lo) interface, thus adding oif lo to it. Without this filter, any query from the host to anywhere using port 8088 would be redirected to the container, while only the case for the host to itself is intended.
Adding this chain in the already existing ip myportforwarding table will handle it:
chain output {
type nat hook output priority dstnat; policy accept;
tcp dport 8088 oif "lo" dnat to 10.0.3.230
}For the little details: a change from nat/output triggers the reroute check part, where the routing stack is told to reconsider the previous routing decision (output interface lo). After reroute check the output interface becomes lxcbr0.
|
I have a machine that serves both as a router and a server. I have several lxc containers on this machine, and want to expose them to both the LAN and WAN. Following https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-configuring_port_forwarding_using_nftables I was able to successfully access the servers from both WAN and LAN machines, but not localhost/the router-server itself!
Here is the configuration that partially works:
# Created from lxc-net in debian
table inet lxc {
chain input {
type filter hook input priority filter; policy accept;
iifname "lxcbr0" udp dport { 53, 67 } accept
iifname "lxcbr0" tcp dport { 53, 67 } accept
} chain forward {
type filter hook forward priority filter; policy accept;
iifname "lxcbr0" accept
oifname "lxcbr0" accept
}
}
# Created from lxc-net in debian
table ip lxc {
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
ip saddr 10.0.3.0/24 ip daddr != 10.0.3.0/24 counter packets 51 bytes 3745 masquerade
}
}# This is what I added
table ip myportforwarding {
chain prerouting {
type nat hook prerouting priority dstnat; policy accept;
tcp dport 8088 dnat to 10.0.3.230
} chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
ip daddr 10.0.3.230 masquerade
}
}I tried several options from this answer: How to configure port forwarding with nftables for a Minecraft server on Raspberry Pi?
Nothing seemed to work to enable local access to the services on 8088.
Looking at wireshark, access from LAN looks like:
192.168.1.105 -> 192.168.1.1 SYN
10.0.3.1 -> 10.0.3.230 SYN
...Access from the same machine:
192.168.1.1 -> 192.168.1.1 SYN
192.168.1.1 <- 192.168.1.1 FIN!I'm not too familar with nft or iptables, so I'm sure there is something I'm missing
| nft port forwarding not working on router |
After some internal help from my org, there are certain rules added elsewhere where marking is happening on packets, which I cannot share on open forum. Based on that the packets were skipping the rate limit rule. I resolved that issue by setting proper marks.
|
I have a wifi router where the wlan0 interface (radio interface) is bridged with the ethernet interface eth0 (connected to another server acting as DHCP)
/ # brctl show br0
bridge name bridge id STP enabled interfaces
br0 8000.bce67c4d8fb0 no eth0
wlan0Linux Kernel version: 4.4.60
nftables version: v0.9.6
I was trying to setup a rule in ip6 family to rate-limit ipv6 traffic. Here is my rule:
/ # nft list chain ip6 ngadre_rate_limiting ngadre_counter
table ip6 ngadre_rate_limiting {
chain ngadre_counter {
type filter hook prerouting priority raw; policy accept;
limit rate 625 kbytes/second counter packets 1945 bytes 609744 accept
counter packets 0 bytes 0 drop
}
}When I run sudo ping6 2006:db8:0:f101::10 -i 0.1 the rule is hit and counter are incrementing. But when I run iperf3 -V -c 2006:db8:0:f101::10 -p5678 -i1 -tinf, the rule is not hit counters are not incrementing and traffic is not rate limited.
I applied the rule following this wiki:
nft add rule filter input limit rate 10 mbytes/second acceptAm I missing something in the ip6 rule ?
UPDATE 1: I tried with limit rate 2/second and tried fast ping6 -i 0.1, here rule is being hit and drop counters incrementing, also ping6 is being rate limited to 2 packets per second. THat means with icmpv6 there doesn't seem any issue with limit rate, but when I try iperf3, this rule does not even hit and drop counters also not incrementing.
| nftables does not limit ipv6 traffic in rate limit rule in bridge and ip6 family |
Looking at the NFTables configuration you have the line nft add chain nat PROXY { type nat hook prerouting priority -1\; } which includes a prerouting hook which I can not find in your IPTables configuration (iptables -t nat -N PROXY).
Because there is a hook it is not possible to make a jump to it.
This configuration without the hook should work:
nft add table natnft add chain nat PROXY
nft add chain nat OUTPUT { type nat hook output priority -1\; }nft add rule nat PROXY ip daddr { 1.1.1.1/32, 1.0.0.1/32 } return
nft add rule nat PROXY ip daddr { 1.0.0.0/8 } ip protocol { tcp, udp } redirect to :3127nft add rule nat OUTPUT ip daddr { 1.0.0.0/8 } ip protocol { tcp, udp } jump PROXYAs you are migrating from IPTables to NFTables I would strongly advice to take a look at the Atomic rule replacement and the native scripting environment (/etc/nftables.conf) NFTables offers.
|
I trying to migrate from iptables script to nftables. In this script I want to redirect some outgoing tcp/udp traffic to local proxy excluding some subnets. iptables script works as expected (ip's are changed):
iptables -t nat -N PROXYiptables -t nat -A PROXY -d 1.1.1.1/32 -j RETURN
iptables -t nat -A PROXY -d 1.0.0.1/32 -j RETURNiptables -t nat -A PROXY -p tcp -d 1.0.0.0/8 -j REDIRECT --to-ports 3127
iptables -t nat -A PROXY -p udp -d 1.0.0.0/8 -j REDIRECT --to-ports 3127iptables -t nat -A OUTPUT -p tcp -d 1.0.0.0/8 -j PROXYBut adapted nftables script gives an error:
Error: Could not process rule: Operation not supported
add rule nat OUTPUT ip daddr { 1.0.0.0/8 } ip protocol { tcp, udp } jump PROXY
^^^^^Here is nftables script I want to adapt:
nft add table natnft add chain nat PROXY { type nat hook prerouting priority -1\; }
nft add chain nat OUTPUT { type nat hook output priority -1\; }nft add rule nat PROXY ip daddr { 1.1.1.1/32, 1.0.0.1/32 } return
nft add rule nat PROXY ip daddr { 1.0.0.0/8 } ip protocol { tcp, udp } redirect to :3127nft add rule nat OUTPUT ip daddr { 1.0.0.0/8 } ip protocol { tcp, udp } jump PROXYSome notes:all tables/chains/ruleset are flushed/deleted before running each script
running lsmod | grep ^nf shows all kernel modules are loaded (afaik)
everything is executed by rootThank you.
EDIT:
nft list ruleset gives this result:
table ip nat {
chain PROXY {
type nat hook prerouting priority filter - 1; policy accept;
ip daddr { 1.0.0.1, 1.1.1.1 } return
ip daddr 1.0.0.0/8 ip protocol { tcp, udp } redirect to :3127
} chain OUTPUT {
type nat hook output priority filter - 1; policy accept;
}
}The last rule is not appended because of error.
uname -a: Linux honeypot 6.1.7-1-MANJARO #1 SMP PREEMPT_DYNAMIC Wed Jan 18 22:33:03 UTC 2023 x86_64 GNU/Linux
| nftables error when jumping to `output` chain |
There are a few key sentences that I first want to point out from the wiki:You can select which flows you want to offload through the flow
expression from the forward chain.Flows are offloaded after the state is created. That means that
usually the first reply packet will create the flowtable entry. A
firewall rule to accept the initial traffic is required. The flow
expression on the forward chain must match the return traffic of the
initial connection.Getting even more specific, it points out:This also means if you are using special ip rules, you need to make
sure that they match the reply packet traffic as well as the original
traffic.I found in my experiences with it, working out reply direction, estab or not, initial etc was just too hard.
Also from their example, pay close attention to their comments; the flow statement offloads established, and further down you must accept the initial connection(s).
Offloading in general is a good thing IMO, so I just use a broad flow statement like this:
ip6 nexthdr { tcp, udp } flow add @f counter
ip protocol { tcp, udp } flow add @f counterAnd then I allow out and in as normal:
ct state established,related counter accept
iif $DEV_INSIDE counter accept
iif $DEV_OUTSIDE ip6 daddr <mail server> tcp dport { smtp } counter acceptThe flow statement definitely does not just accept everything, although it looks that way. Look at their ASCII diagram above and it should make more sense.
Pablo Neira Ayuso put it a slightly different way:The 'flow offload' action adds the flow entry once the flow is in
established state, according to the connection tracking definition, ie.
we have seen traffic in both directions. Therefore, only initial packets
of the flow follow the classic forwarding path.So in summary, yes, you could get more specific in the flow add statements, but why bother when ultimately, flows follow ESTAB state, and you can maintain protection with normal, stateful rules elsewhere.
This resource is also quite in-depth and awesome.
I would go with your first example, and just double-check with an nmap scan. Also note that you can then check for success using conntrack -L.
|
Flowtables is an nftables feature for offloading traffic to a "fast path" that skips the typical forwarding path once a connection is established. Two things need to be configured to set up flowtables. First is the flowtable itself, which is defined as part of a table. Second is a flow offload statement, and this is what my question is about. The nft man page says:
FLOW STATEMENT
A flow statement allows us to select what flows you want to accelerate
forwarding through layer 3 network stack bypass. You have to specify
the flowtable name where you want to offload this flow. flow add @flowtableThe wiki page includes this full configuration example:
table inet x { flowtable f {
hook ingress priority 0; devices = { eth0, eth1 };
} chain forward {
type filter hook forward priority 0; policy drop; # offload established connections
ip protocol { tcp, udp } flow offload @f
ip6 nexthdr { tcp, udp } flow offload @f
counter packets 0 bytes 0 # established/related connections
ct state established,related counter accept # allow initial connection
ip protocol { tcp, udp } accept
ip6 nexthdr { tcp, udp } accept
}
}Here's where I get confused. The example uses the same condition (ip protocol { tcp, udp }) to both accept the traffic and to flow offload it. Is that because flow offload will implicitly accept a flow by adding it to the flowtable, meaning those conditions should always match? Or is it just a coincidence in this example, and the accept rule could be more restrictive?
Concretely, suppose I only want to forward SSH traffic inbound from eth0, and I want to enable flow offloading. Should I configure the forward chain like this?
chain forward {
type filter hook forward priority 0; policy drop; # offload established connections
ip protocol { tcp, udp } flow offload @f # established/related connections
ct state established,related counter accept # allow initial connection
iif eth0 tcp dport 22 accept
}Or like this? (only the flow offload rule has changed)
chain forward {
type filter hook forward priority 0; policy drop; # offload established connections
iif eth0 tcp dport 22 flow offload @f # established/related connections
ct state established,related counter accept # allow initial connection
iif eth0 tcp dport 22 accept
} | How should `flow offload` statements be configured when using flowtables? |
This can be done with nftables and netdev family with an ingress chain and a dup statement. It requires using a mark to avoid an infinite loop. Depending on the exact use case, the duplication can also probably be done on egress (since it's on the loopback interface, the duplicated egress packet will appear back as ingress) but this would require kernel >= 5.17 for support, while ingress has been available for a long time.
Requires kernel >= 4.10 (for the stateless UDP alteration correct checksum support).
# nft -f - <<'EOF'
table netdev t_dup        # for idempotency
delete table netdev t_dup # for idempotency

table netdev t_dup {
    chain c_ingress {
        type filter hook ingress device "lo" priority filter; policy accept;
        iif lo udp dport 123 meta mark != 1 meta mark set 1 dup to lo udp dport set 456
    }
}
EOF

- The candidate packet is checked for a mark, and processed only if there's no mark, starting by setting a mark: this will prevent a loop later.
- It's duplicated. The mark, being part of the duplicated sk_buff, is also duplicated.
  - The duplicate will actually be the unchanged packet: sent to the same place (lo) and also to port 123.
- The dup statement, contrary to any iptables' target, including its TEE target, isn't a rule terminating statement. The rule continues with a stateless alteration of the packet: the UDP port is changed to 456.
- The duplicated packet also arrives in ingress, but as it's marked the rule ignores it: the loop is prevented.

The duplicated port can be tested with socat:
socat -u udp4-recv:456,bind=127.0.0.1 -

Notes:

- If nothing is listening on the duplicated port, an ICMP port unreachable will be emitted, but since this port (456) doesn't match the port the sending application is sending to (123), it will be ignored by the network stack.
- Netfilter's ingress happening after AF_PACKET, tcpdump will not capture the changed port nor the duplicated packet.
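To exercise the whole path, one can also generate a datagram toward the original port from a second terminal (a quick sketch reusing the example addresses and ports above):

# terminal 1: listen on the duplicated port
socat -u udp4-recv:456,bind=127.0.0.1 -

# terminal 2: send a test datagram to the original port
echo test | socat -u - udp4-sendto:127.0.0.1:123

If the rule is in place, "test" appears in terminal 1 even though nothing was ever sent to port 456 directly.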
I would like to mirror only very limited packet traffic destined to a single IP and UDP port to the same IP but a second port. I've looked at https://superuser.com/questions/1593995/iptables-nftables-forward-udp-data-to-multiple-targets but it seems as if nftables' dup statement only allows duplicating to another IP but not the same IP and different port.
For instance, traffic to 127.0.0.1 port 123 should be duplicated to 127.0.0.1 port 456. As this is the only single duplication I don't have any problems with the original port number getting lost. Now I wonder if this kind of duplication is possible at all because 127.0.0.1 is not a "drain"/outgoing interface but the duplicate's final destination. Might there be some way to combine this with DNAT?
Is there some other mechanism available short of attaching an eBPF probe to the netdev with the incoming UDP traffic?
| nftables: duplicate UDP packets for specific destination IP:port to a (second) destination IP:port |
IPv4 over Ethernet relies on ARP to resolve the Ethernet MAC address of the peer in order to send later unicast packets to it.
As you're filtering any ARP request since there's no exception for it, those requests can't succeed and after a typical 3s timeout you'll get the standard "No route to host". There won't be any IPv4 packet sent from 192.168.1.10 to 192.168.1.1 or from 192.168.1.1 to 192.168.1.10, since the previous step failed.
So add this rule for now, and see later how to fine-tune it if you really need:
nft add rule bridge vmbrfilter forward ether type arp accept

If the bridge is VLAN aware (vlan_filtering=1) or probably even if not (ie: a bridge manipulating frames and not really knowing more about them, which is probably not good if two frames from two different VLANs have the same MAC address) then here's a rule to allow ARP packets within VLAN tagged frames:
nft add rule bridge vmbrfilter forward ether type vlan vlan type arp accept

But anyway, IP will have the same kind of problem without adaptation. This requires more information about the VLAN setup.
Here's a ruleset allowing tagged and untagged frames alike, which requires duplicating the rules. Since the ARP rule has no further expression that would auto-select the protocol/type, it needs an explicit vlan type arp.
table bridge vmbrfilter # for idempotency
delete table bridge vmbrfilter # for idempotency

table bridge vmbrfilter {
    chain forward {
        type filter hook forward priority -100; policy drop;

        ip saddr 192.168.1.10 ip daddr 192.168.1.1 accept
        ip saddr 192.168.1.1 ip daddr 192.168.1.10 accept
        ether type arp accept

        ether type vlan ip saddr 192.168.1.10 ip daddr 192.168.1.1 accept
        ether type vlan ip saddr 192.168.1.1 ip daddr 192.168.1.10 accept
        ether type vlan vlan type arp accept
    }
}

Also older versions of nftables (eg: OP's 0.9.0) might omit mandatory filter expressions in the output when they don't have additional filters (eg, but not present in this answer: ether type vlan arp htype 1 (display truncated) vs vlan id 10 arp htype 1), so their output should not be reused as input in configuration files. One can still tell the difference and know the additional filter expression is there by using nft -a --debug=netlink list ruleset.
As far as I know there's no support yet for arbitrary encapsulation/decapsulation of protocols in nftables, so duplication of rules appears unavoidable (just look at the bytecode to see how the same fields are looked up at different offsets for the VLAN and non-VLAN cases).
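If useful, here's how one might load and verify this (a sketch; the file path is just an example, not from the original answer):

# save the ruleset above to a file and load it (the table + delete preamble makes re-running safe)
nft -f /etc/nftables.d/vmbrfilter.nft

# from the host at 192.168.1.10, check that ARP now resolves and traffic passes
ip neigh show 192.168.1.1
ping -c 1 192.168.1.1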
|
I'd like to take a default drop approach to my firewall rules. I've created some rules for testing purposes:
table bridge vmbrfilter {
chain forward {
type filter hook forward priority -100; policy drop;
ip saddr 192.168.1.10 ip daddr 192.168.1.1 accept;
ip saddr 192.168.1.1 ip daddr 192.168.1.10 accept;
}
}

However traffic between 192.168.1.1 and 192.168.1.10 is still blocked. To see if it is a syntax issue, I tried:
table bridge vmbrfilter {
chain forward {
type filter hook forward priority -100; policy accept;
ip saddr 192.168.1.10 ip daddr 192.168.1.1 drop;
ip saddr 192.168.1.1 ip daddr 192.168.1.10 drop;
}
}

This however succeeds in blocking traffic between the two IPs. So I don't have a clue as to why my accept rules aren't being hit. The nftables wiki says:

The drop verdict means that the packet is discarded if the packet reaches the end of the base chain.

But I literally have accept rules in my chain which should be matching.
Have I not understood something correctly? Thanks in advance for any help.
Update: A.B's ARP rule suggestion is helping. However I've discovered that my VLAN tagging is causing issues with my firewall rules. The ARP rule allows tagged traffic in through the physical NIC, the ARP replies are making it over the bridge but get blocked on exit from the physical NIC.
| Nftables default drop chain problem |
Here's a sample of the Packet flow in Netfilter and General Networking diagram, which stays valid for nftables. There's an important detail written on it:

* "nat" table only consulted for "NEW" connections.
For a locally initiated connection, the first packet of the new connection creates a NEW conntrack state during output (the output's conntrack box). When this connection is looped back (in almost all cases through the lo interface) this same conntrack state is matched when the packet comes back through prerouting (the other conntrack box): there aren't two conntrack states for a local looped back connection.
For other cases the first packet of a new connection arrives through prerouting (the prerouting's conntrack box) and leads to the creation of a NEW conntrack state.
So when the first looped-back packet of the connection is sent, a NEW conntrack entry is created during output and when this packet arrives back through prerouting it's matching an existing conntrack entry: not a NEW state anymore. NAT hooks (that means the nat/prerouting chain destination-nat) won't be traversed and thus can't affect this connection: the opportunity was missed during the output step.
Instead, using the dnat statement in nat/output where there will be a NEW conntrack state will solve this case. As it would by default affect any locally initiated connection (ie even to Internet), care must be taken to only use it for connections that stay local: through the lo interface. This chain and rule will do that:
nft add chain inet firewall loopback-nat '{ type nat hook output priority -100; policy accept; }'
nft add rule inet firewall loopback-nat oif lo tcp dport 80 counter redirect to :13080

You can follow the behaviour simply by running elsewhere the conntrack command in event mode:
conntrack -E -p tcp --dport 80and see the difference for a connection initiated from outside and a locally initiated looped-back one, with or without this additional chain. Remember that the nat hook will have seen only entries displayed with the [NEW] state (and will already have altered accordingly the entry displayed by the command).
|
I have set up nftables on my server and I want to redirect all traffic coming in to certain protected ports (e.g. 80) to be redirected to a higher port which is available without root permissions (e.g. 13080). I want to be able to access only 80 remotely but both ports locally.
This is the setup I came up with:
#!/usr/bin/nft -f

flush ruleset

table inet firewall {
    chain inbound {
        type filter hook input priority filter; policy drop;

        ct status dnat counter accept # accept everything that came through destination nat (port 80)

        iifname lo counter accept
        oifname lo counter accept

        ct state { established, related } counter accept
        ct state invalid counter drop

        tcp dport 22 ct state new counter accept

        counter reject with icmpx type port-unreachable # reject everything else
    }

    chain destination-nat {
        type nat hook prerouting priority dstnat; policy accept
        tcp dport 80 counter redirect to 13080 # redirect 80 to 13080
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }

    chain outbound {
        type filter hook output priority 0; policy accept;
    }
}

However, this blocks local access to port 80 for some reason (13080 is still accessible though). Could someone please tell me what I am doing wrong?
| nftables destination nat block local access to port |
Okay so after much consternation I was able to solve my original problem and can therefore post the correct solution here for anyone who might come across this in the future. Application layer configuration is outside the scope of the original question.
Firstly, if you have ufw set up you will need to allow these rules to apply without being blocked by the firewall. This will need to be done on BOTH hosts, if they both have ufw. This is done with the steps described at https://help.ubuntu.com/lts/serverguide/firewall.html under section heading "IP Masquerading", and they are summarized below
0) edit /etc/default/ufw to set DEFAULT_FORWARD_POLICY="ACCEPT"
1) edit /etc/ufw/sysctl.conf to set net/ipv4/ip_forward=1 and net/ipv6/conf/default/forwarding=1
2) put the desired iptables rules in /etc/ufw/before.rules using ufw's syntax, starting with *nat and ending with COMMIT
3) restart ufw

To redirect incoming packets, and proxy them through to Local, use the following configuration on Global:
#ensure forwarding is enabled, just for sanity's sake (for ufw sysctl.conf covers this)
sysctl -w net.ipv4.ip_forward=1

#rewrite incoming port 222 to Local:22
iptables -t nat -A PREROUTING -p tcp --dport 222 -j DNAT --to-dest <Local IP on 10.0.0.0/24 subnet>:22

#having rewritten the destination, also rewrite the source for all packets that now have a destination of Local:22
#rewriting the source means that the ACKs and other bidirectional data gets sent back to Global instead of attempting to go from Local directly to the originator
iptables -t nat -A POSTROUTING -d <Local IP on 10.0.0.0/24 subnet> -p tcp --dport 22 -j SNAT --to-source <Global IP on 10.0.0.0/24 subnet>

#repeat the above for ports 80 and 443, as in the original question
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-dest <Local IP on 10.0.0.0/24 subnet>:80
iptables -t nat -A POSTROUTING -d <Local IP on 10.0.0.0/24 subnet> -p tcp --dport 80 -j SNAT --to-source <Global IP on 10.0.0.0/24 subnet>

iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-dest <Local IP on 10.0.0.0/24 subnet>:443
iptables -t nat -A POSTROUTING -d <Local IP on 10.0.0.0/24 subnet> -p tcp --dport 443 -j SNAT --to-source <Global IP on 10.0.0.0/24 subnet>

That covers the incoming proxy connection. For outgoing connections, we'll need the following configuration on Local:
#rewrite outgoing port 25 to Global:25, UNLESS it's meant for localhost (assumes lo is set up for ipv4, and not just ipv6)
iptables -t nat -A OUTPUT -p tcp '!' -d 127.0.0.1/32 --dport 25 -j DNAT --to-destination <Global IP on 10.0.0.0/24 subnet>:25

#having rewritten the destination, also rewrite the source for all packets that now have a destination of Global:25
iptables -t nat -A POSTROUTING -p tcp '!' -d 127.0.0.1/32 --dport 25 -j SNAT --to-source <Local IP on 10.0.0.0/24 subnet>

We needed to allow localhost so that it's possible to connect to a local server on that port, as that is how SMTP and many other programs (including DNS) that do proxying at the protocol level need to operate. Therefore we forwarded everything that wasn't bound for localhost.
That's it! That's the complete configuration at this level of the stack, and it will get your packets where they need to be. Application level configuration is outside the scope of this question.
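As a quick sanity check (not part of the original write-up, just a sketch): watch the NAT counters on Global while connecting from outside.

# on Global: the DNAT/SNAT rules should show increasing packet counts
iptables -t nat -L PREROUTING -n -v
iptables -t nat -L POSTROUTING -n -v

# from an outside host: these should now land on Local
ssh -p 222 user@<Global public IP>
curl -I http://<Global public IP>/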
|
I am attempting to solve the following problem:
I have a system which I will henceforth refer to as "Local" that hosts a service on port 80 and port 443, and depends on sending outgoing requests on port 25. It also hosts a separate service on port 22.
I have a system which I will call "Global" that has a globally accessible static IP address and has DNS configured for it, and is capable of accepting incoming requests on port 80, 443, 25, and 222.
Local and Global are connected (via a VPN interface, if it matters) on the reserved subnet 10.0.0.0/24
I want all incoming requests on Global ports 80 and 443 to redirect to Local on ports 80 and 443 respectively.
I also want incoming requests on Global port 222 to redirect to Local on port 22 (yes, that is an intentionally different port).
In addition, I want all outgoing requests to port 25 from Local to redirect to Global on port 25.
Both Local and Global are modern linux systems with apt, iptables, nftables, and ufw available.
I have tried a variety of iptables configurations with no success.
As far as I can tell a configuration that /should/ work (but doesn't!) is as follows:
Global:
/etc/ufw/before.rules (excerpt)
*nat
:PREROUTING ACCEPT [0:0]

# forward port 222 to Local:22
-A PREROUTING -p tcp --dport 222 -j DNAT --to-destination <Local IP on 10.0.0.0/24 Subnet>:22

# forward port 80 to Local:80
-A PREROUTING -p tcp --dport 80 -j DNAT --to-destination <Local IP on 10.0.0.0/24 Subnet>:80

# forward port 443 to Local:443
-A PREROUTING -p tcp --dport 80 -j DNAT --to-destination <Local IP on 10.0.0.0/24 Subnet>:443

# and forward the responses the other direction
-A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE

COMMIT

Local:
/etc/ufw/before.rules (excerpt)
*nat
:PREROUTING ACCEPT [0:0]

# forward outgoing port 25 to Global:25
-A OUTPUT -p tcp --dport 25 -j DNAT --to-destination <Global IP on 10.0.0.0/24 Subnet>:25

COMMIT

I realize that incoming requests for HTTP can be routed using nginx or apache configuration, but I want a generic solution that is not protocol-dependent and could be used for ssh or other protocols, as this traffic will not necessarily always be HTTP.
Does anyone know how this can be done?
Is there some reason that this type of configuration isn't possible?
| How can I configure bidirectional protocol-independent port forwarding? |
The operation done with nft -f /path/to/new/rules is atomic: that means it's either done completely or not done at all (ie: reverted), and will affect all at once the next packet hitting the rules only once committed. It won't end (because of an error) only half-done. So if you don't delete the previous ruleset before, it behaves as expected: it atomically adds rules once more, as described in the notes of Atomic rule replacement in the wiki:

Duplicate Rules: If you prepend the flush table filter line at the very beginning of the filter-table file, you achieve atomic rule-set replacement equivalent to what iptables-restore provides. The kernel handles the rule commands in the file in one single transaction, so basically the flushing and the load of the new rules happens in one single shot. If you choose not to flush your tables then you will see duplicate rules for each time you reloaded the config.

For this you must, in the same transaction, delete older rules before adding again your ruleset. The easiest (but affecting all of nftables, including iptables-nft if also used) is simply, similarly to what is described above, to prepend your ruleset /path/to/new/rules with:
flush ruleset

If you're loading different tables at different times to keep logical features separated (in nftables, a table can include any type of base chain (for the given family); it's not the direct equivalent of a table in iptables, which has a fixed set of possible chains), it becomes a bit more complex because flush ruleset in one rule file would delete other tables (including iptables-nft rules if used along nftables). Then this should be done at the table level, with for example (but read further before doing):
delete table inet foo

followed by the redefinition of it (table inet foo { ...). As-is, this creates another chicken-and-egg problem: the first time this file would be read, eg at boot, the delete operation would fail, and thus everything would atomically fail as a whole because the table didn't exist. As declaring an already declared table's name is considered a no-op and thus won't fail, in the end this can be done:
table inet foo
delete table inet foo

table inet foo {
[...]note 1: For this to work properly in all cases kernel >= 3.18 is required, else better stick to flush ruleset.note 2: the wiki's note above suggests using for this case flush table inet foo but this should probably be avoided, because if sets are present this won't delete element in sets, leading again to adding instead of replacing elements if the elements are added by the ruleset and were changed there. It won't either allow to redefine the type/hook of a base chain. Using table inet foo + delete table inet foo doesn't have these drawbacks. Of course if one needs to keep elements in sets when reloading rules, one might ponder using flush table inet foo and adapt to the limitations of this method.In all cases you should be careful when using nft list {ruleset, table inet foo, ...} > /path/to/new/rules to dump the current rules to a rule file: it won't include any flush or delete command, and you'll have to add them back manually. You can probably use include to overcome this by keeping your "plumber" statements outside of the actual rules.
|
I'm trying to atomically replace nftables rules. Nftables's official wiki states that -f is the recommended way to achieve this. However, when I run nft -f /path/to/new/rules on Debian Buster, the new rules get added to the current rules instead of replacing them, and I end up with a system enforcing both rule-sets simultaneously.
The same thing occurs when I try to reload configuration via systemd's nftables.service.
How can I get nft to discard the current rule-set while also adding a new rule-set in a single atomic operation?
| How do I replace nftables rules atomically? |