GRE tunnels are down after netplan apply

Bug #1812680 reported by Tomasz P.
This bug affects 2 people

Affects              Status     Importance  Assigned to  Milestone
netplan              New        Undecided   Unassigned
netplan.io (Ubuntu)  Confirmed  Medium      Unassigned

Bug Description

After configuring GRE tunnels via netplan, the tunX devices are in state DOWN. I need to bring them up manually in order to have connectivity with the other end of each tunnel.
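Bringing the links up by hand restores connectivity (tun0/tun1 as named in the config below):

$ sudo ip link set tun0 up
$ sudo ip link set tun1 up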

Netplan file:

network:
  version: 2
  renderer: networkd
  tunnels:
    tun0:
      mode: gre
      local: 1.1.1.1
      remote: 2.2.2.2
      addresses:
        - 10.7.10.50/30
    tun1:
      mode: gre
      local: 3.3.3.3
      remote: 4.4.4.4
      addresses:
        - 10.7.10.54/30
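The backend files netplan writes for this YAML can be regenerated and inspected without applying (standard netplan workflow; the 10-netplan-* names match what the generator writes to /run/systemd/network, as shown further below):

$ sudo netplan generate
$ ls /run/systemd/network/10-netplan-tun*
$ sudo netplan apply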

Output of ip a:
16: tun1@NONE: <POINTOPOINT,NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 1.1.1.1 peer 2.2.2.2
    inet 10.7.10.54/30 brd 10.7.10.55 scope global tun1
       valid_lft forever preferred_lft forever
17: tun0@NONE: <POINTOPOINT,NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 3.3.3.3 peer 4.4.4.4
    inet 10.7.10.50/30 brd 10.7.10.51 scope global tun0
       valid_lft forever preferred_lft forever

Netplan version: 0.95-1
systemd: 237

Networkd config files:
$ cat 10-netplan-tun0*
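# 10-netplan-tun0.netdev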
[NetDev]
Name=tun0
Kind=gre

[Tunnel]
Independent=true
Local=1.1.1.1
Remote=2.2.2.2

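# 10-netplan-tun0.network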
[Match]
Name=tun0

[Network]
LinkLocalAddressing=ipv6
Address=10.7.10.50/30
ConfigureWithoutCarrier=yes
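networkctl shows how networkd classifies these links, which may help pin down whether the .network file is being applied at all (diagnostic sketch; output format varies by systemd version):

$ networkctl list
$ networkctl status tun0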

Tags: gre tun tunnels
Tomasz P. (gites)
affects: netplan → netplan.io (Ubuntu)
Launchpad Janitor (janitor) wrote:

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in netplan.io (Ubuntu):
status: New → Confirmed
Angel Abad (angelabad) wrote:

I have the same issue on my gre tunnels.

Thanks!

Changed in netplan.io (Ubuntu):
importance: Undecided → Medium
Damien Gardner (rendragnet) wrote:

Yep, confirmed here too...

Was running up a couple of new nodes and losing my mind trying to figure out what I was doing wrong. Went back to an existing node to confirm our rollout docs and config were correct; the only difference I could find was that the existing servers were running netplan.io 0.97-0ubuntu1~18.04 while the new servers are on 0.98.

Did a dist-upgrade on one of the existing servers, and bingo, it now has the same issue. :(
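A possible stopgap until this is fixed is pinning the last known-good version (assuming 0.97-0ubuntu1~18.04 is still available from the archive or local apt cache):

$ sudo apt install netplan.io=0.97-0ubuntu1~18.04
$ sudo apt-mark hold netplan.io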
