SEBA Tutorial

SEBA is a lightweight platform based on a variant of R-CORD. It supports a multitude of virtualized access technologies at the edge of the carrier network, including PON, G.Fast, and eventually DOCSIS and more. SEBA supports both residential access and wireless backhaul and is optimized such that traffic can run ‘fastpath’ straight through to the backbone without requiring VNF processing on a server.

R-CORD Legacy

In this post, I want to share how to build SEBA-in-a-Box (SiaB), but with a different approach from the official tutorial. The official tutorial builds every component, from the subscriber device all the way to the BNG, on a single node. In this tutorial, we split the components across three nodes:

  • Access node, including the subscriber (RG), virtual ONU, virtual OLT, and the SEBA components
  • Aggregation switch
  • BNG (simply a DHCP server)

The first thing to do is to build the topology:

For now, we do not care about the Mininet or BNG IP addresses on the management network; we only need to know the SEBA node's IP address, because Mininet uses it to connect the switch to the ONOS controller. All three nodes run Ubuntu 16.04 server.
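As a concrete example, the rest of this post assumes the SEBA node is reachable at 10.200.200.10 on the management network (the same address used later as the ONOS controller IP); adjust it to your own addressing. A quick reachability check from the Mininet and BNG nodes:

# assumed management IP of the SEBA node (change to match your environment)
SEBA_IP=10.200.200.10
ping -c 3 $SEBA_IP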

SEBA NODE

  • Install some requirements
sudo apt-get update
sudo apt-get install -y software-properties-common bridge-utils httpie jq
  • Install Docker
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 0EBFCD88
sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"
sudo apt-get update
sudo apt-get install -y "docker-ce=17.03*"
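As an optional sanity check, confirm the Docker engine installed correctly and is running before moving on:

sudo docker version
sudo systemctl status docker --no-pager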
  • Install Kubernetes
sudo apt-get install -y ebtables ethtool apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat <<EOF >/tmp/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo cp /tmp/kubernetes.list /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update

sudo apt install -y kubeadm kubelet kubectl

sudo service apparmor stop
sudo service apparmor teardown
sudo update-rc.d -f apparmor remove
  • Restart the server before continuing
  • Bootstrapping Kubernetes Cluster
sudo swapoff -a
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
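Before continuing, it is worth waiting until the node reports Ready and the Calico and DNS pods in kube-system are Running; this can take a few minutes:

kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide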
  • Install Helm
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

cat > /tmp/helm.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: helm
    namespace: kube-system
EOF

kubectl create -f /tmp/helm.yaml
helm init --service-account helm

helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
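Tiller needs a moment to start; as a quick check, both the Helm client and server should report a version before you install any charts:

kubectl get pods -n kube-system | grep tiller
helm version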
  • Clone the SEBA/CORD helm charts
mkdir -p cord
cd cord
git clone https://gerrit.opencord.org/helm-charts
  • Install Kafka
cd ~/cord/helm-charts
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install -n cord-kafka --version=0.13.3 -f examples/kafka-single.yaml incubator/kafka
# Wait for Kafka to come up
kubectl wait pod/cord-kafka-0 --for condition=Ready --timeout=180s
  • Install ONOS
helm install -n onos onos
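ONOS also takes a while to pull and start, so keep an eye on its pod before moving over to the Mininet node:

kubectl get pods | grep onos   # wait until the onos pod is Running and Ready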

Mininet Node

  • Install Mininet
sudo apt install mininet -y
sudo apt install python-minimal -y 
sudo modprobe openvswitch
  • Enable forwarding
sudo nano /etc/sysctl.conf
...
net.ipv4.ip_forward=1
...

sudo sysctl -p
  • Create the following custom Mininet script and save it as fabric.py (change the controller IP address to your SEBA node's address)
#!/usr/bin/python

from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.cli import CLI
from mininet.link import Intf
from mininet.log import setLogLevel, info

def myNetwork():
    net = Mininet( topo=None,controller=RemoteController,switch=OVSSwitch)

    info( '*** Add Controller\n')
    net.addController('c0', ip='10.200.200.10', port=31653)

    info( '*** Add switches\n')
    s1 = net.addSwitch('s1')
    # Attach the physical NICs: ens9 faces the SEBA node, ens10 faces the BNG node
    Intf( 'ens9', node=s1 )
    Intf( 'ens10', node=s1 )
    
    info( '*** Starting network\n')
    net.start()
    CLI(net)
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    myNetwork()
  • Run Mininet
sudo python fabric.py
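Once the Mininet CLI comes up, you can verify from inside it that s1 has picked up both physical interfaces and is connected to the remote controller (standard Mininet CLI commands):

mininet> net
mininet> ports
mininet> sh ovs-vsctl show   # the controller entry should report is_connected: true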

SEBA NODE

  • Get the ONOS port mapping via Kubernetes
btech@zu-seba:~/cord/helm-charts$ kubectl get svc
NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                           AGE
onos-debugger                   NodePort    10.99.147.131    <none>        5005:30555/TCP                    77m
onos-openflow                   NodePort    10.104.5.123     <none>        6653:31653/TCP                    77m
onos-ovsdb                      ClusterIP   10.98.93.72      <none>        6640/TCP                          77m
onos-ssh                        NodePort    10.100.62.23     <none>        8101:30115/TCP                    77m
onos-ui                         NodePort    10.99.144.95     <none>        8181:30120/TCP                    77m
  • Open the ONOS UI
http://10.200.200.10:30120/onos/ui/index.html#/topo
  • Make sure the port mapping looks like this: port 1 (ens9) on the switch is connected to the SEBA node, and port 2 (ens10) is connected to the BNG node. The same information can also be pulled over the REST API, as shown below.
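If you prefer the REST API over the GUI, ONOS exposes the devices and their ports with the karaf:karaf credentials and the onos-ui NodePort shown above (10.200.200.10 is the SEBA node address assumed throughout this post):

http -a karaf:karaf GET http://10.200.200.10:30120/onos/v1/devices
http -a karaf:karaf GET http://10.200.200.10:30120/onos/v1/devices/of:0000000000000001/ports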

  • Install Voltha
cd ~/cord/helm-charts
helm install -n etcd-operator stable/etcd-operator --version 0.8.3
kubectl get crd | grep etcd

# After EtcdCluster CRD is in place
helm dep up voltha
helm install -n voltha -f configs/seba-ponsim.yaml voltha
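VOLTHA brings up quite a few pods; wait until everything in the voltha namespace is Running and Ready before installing Ponsim:

kubectl get pods -n voltha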
  • Install Ponsim
cd ~/cord/helm-charts
helm install -n ponnet ponnet
~/cord/helm-charts/scripts/wait_for_pods.sh kube-system

helm install -n ponsimv2 ponsimv2
sudo iptables -P FORWARD ACCEPT
  • Add the ens9 interface on the SEBA node to the pon1 bridge.
sudo ifconfig ens9 up 
sudo brctl addif pon1 ens9

btech@zu-seba:~/cord/helm-charts$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.024281cec01f       no
pon0            8000.06690b1a9024       no              vetha02ba499
                                                        vethed0bcd62
pon1            8000.224452167be7       no              ens9
                                                        veth58604cbb

echo 8 > /tmp/pon0_group_fwd_mask
sudo cp /tmp/pon0_group_fwd_mask /sys/class/net/pon0/bridge/group_fwd_mask
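Setting group_fwd_mask to 0x8 lets the pon0 bridge forward frames addressed to 01:80:c2:00:00:03, the group address used by the 802.1X/EAPOL exchange you will see later from wpa_supplicant. A quick check that the mask took effect and that ens9 really joined pon1:

cat /sys/class/net/pon0/bridge/group_fwd_mask   # should print 0x8
brctl show pon1                                 # ens9 should appear in the interfaces column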
  • Install NEM Chart
cd ~/cord/helm-charts
helm dep update xos-core
helm install -n xos-core xos-core
helm dep update xos-profiles/seba-services
helm install -n seba-services xos-profiles/seba-services
helm dep update workflows/att-workflow
helm install -n att-workflow workflows/att-workflow -f configs/seba-ponsim.yaml
helm dep update xos-profiles/base-kubernetes
helm install -n base-kubernetes xos-profiles/base-kubernetes
kubectl get pod --all-namespaces
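These charts spawn a fair number of additional pods, so this step takes a while; everything should eventually settle into Running or Completed before the next step:

watch kubectl get pod --all-namespaces   # Ctrl-C once nothing is left Pending or erroring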
  • Modify ~/cord/helm-charts/xos-profiles/ponsim-pod/tosca/020-pod-olt.yaml. Set switch_datapath_id to your fabric switch dpid and switch_port to the fabric switch port to which the Kubernetes node running the OLT is connected (a dpid lookup example follows the snippet).
    olt_device:
      type: tosca.nodes.OLTDevice
      properties:
        name: PONSIM OLT
        device_type: ponsim_olt
        host: olt.voltha.svc
        port: 50060
        switch_datapath_id: of:0000000000000001
        switch_port: "1"
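If you are unsure of your fabric switch dpid, it can be read back from ONOS over the same REST API; in this topology it should be the of:0000000000000001 shown above:

http -a karaf:karaf GET http://10.200.200.10:30120/onos/v1/devices | jq '.devices[].id'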
  • Modify ~/cord/helm-charts/xos-profiles/ponsim-pod/tosca/030-fabric.yaml. Set ofId to the fabric switch dpid, olt_port:portId to the fabric switch port to which the Kubernetes node running the OLT is connected, and bng_port:portId plus bngmapping:switch_port to the fabric switch port to which the BNG/DHCP server is connected.
    port#olt_port:
      type: tosca.nodes.SwitchPort
      properties:
        portId: 1

    port#bng_port:
      type: tosca.nodes.SwitchPort
      properties:
        portId: 2

    bngmapping:  
      type: tosca.nodes.BNGPortMapping
      properties:
        s_tag: "any"
        switch_port: 2

The dhcpl2relay app configuration should likewise point its DHCP server connect point at the fabric switch port facing the BNG:

          {
            "dhcpl2relay" : {
              "useOltUplinkForServerPktInOut" : false,
              "dhcpServerConnectPoints" : [ "of:0000000000000001/2" ]
            }
          }
  • Populate the configuration
helm install -n ponsim-pod xos-profiles/ponsim-pod
  • Instruct the ONU to exchange untagged packets with the RG, rather than packets tagged with VLAN 0
http -a karaf:karaf POST \
    http://127.0.0.1:30120/onos/v1/configuration/org.opencord.olt.impl.Olt defaultVlan=65535
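You can read the setting back to confirm it was applied:

http -a karaf:karaf GET http://127.0.0.1:30120/onos/v1/configuration/org.opencord.olt.impl.Olt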

BNG NODE

  • Configure a Q-in-Q interface and enable DHCP service on the DHCP server (BNG) interface connected to the fabric switch
sudo apt-get install vlan
sudo modprobe 8021q

sudo su
ip link set ens9 up
vconfig add ens9 222
ip link set ens9.222 up
vconfig add ens9.222 111
ip link set ens9.222.111 up
ip addr add 172.18.0.10/24 dev ens9.222.111
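A quick check that the double-tagged interface came up with the expected address:

ip -d link show ens9.222.111   # should show a vlan with id 111 stacked on ens9.222
ip addr show ens9.222.111      # should show 172.18.0.10/24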
  • Install DHCP Server
sudo apt update
sudo apt-get install dnsmasq
sudo systemctl stop dnsmasq
sudo dnsmasq --dhcp-range=172.18.0.50,172.18.0.150,12h
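As an optional check on the BNG, confirm that dnsmasq is running and listening for DHCP requests on UDP port 67:

ps aux | grep [d]nsmasq
sudo ss -lunp | grep :67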

SEBA NODE

  • Log into the RG and authenticate against the SEBA system
RG_POD=$( kubectl -n voltha get pod -l "app=rg" -o jsonpath='{.items[0].metadata.name}' )
kubectl -n voltha exec -ti $RG_POD bash
root@rg-85f97f7c98-5nswm:/# wpa_supplicant -i eth0 -Dwired -c /etc/wpa_supplicant/wpa_supplicant.conf                                                                                                              
Successfully initialized wpa_supplicant                                                                                                                                                                            
eth0: Associated with 01:80:c2:00:00:03                                                                                                                                                                            
WMM AC: Missing IEs                                                                                                                                                                                                
eth0: CTRL-EVENT-EAP-STARTED EAP authentication started                                                                                                                                                            
eth0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4                                                                                                                                                             
eth0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 4 (MD5) selected                                                                                                                                                   
eth0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
  • Hit Ctrl-C after this point to get back to the shell prompt.
  • Get IP Address
ifconfig eth0 0.0.0.0
dhclient
root@rg-85f97f7c98-5nswm:/# dhclient
mv: cannot move '/etc/resolv.conf.dhclient-new.112' to '/etc/resolv.conf': Device or resource busy
root@rg-85f97f7c98-5nswm:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 6a:47:91:eb:1b:3c  
          inet addr:172.18.0.125  Bcast:172.18.0.255  Mask:255.255.255.0
          inet6 addr: fe80::6847:91ff:feeb:1b3c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2614 errors:0 dropped:1593 overruns:0 frame:0
          TX packets:1083 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:249272 (249.2 KB)  TX bytes:124640 (124.6 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@rg-85f97f7c98-5nswm:/# ping 172.18.0.10
PING 172.18.0.10 (172.18.0.10) 56(84) bytes of data.
64 bytes from 172.18.0.10: icmp_seq=1 ttl=64 time=48.5 ms
64 bytes from 172.18.0.10: icmp_seq=2 ttl=64 time=40.9 ms
64 bytes from 172.18.0.10: icmp_seq=3 ttl=64 time=35.3 ms
^C
--- 172.18.0.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 35.349/41.615/48.529/5.402 ms
root@rg-85f97f7c98-5nswm:/# 

Okay, so the RG can now ping the BNG/DHCP node, and from here it should be possible to reach the internet.

These are the flows installed in the vOLT, with the help of VOLTHA:
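If you want to inspect these flows yourself, they can be dumped from ONOS with the same REST credentials used earlier (list the devices first to find the vOLT device id):

http -a karaf:karaf GET http://10.200.200.10:30120/onos/v1/devices | jq '.devices[].id'
http -a karaf:karaf GET http://10.200.200.10:30120/onos/v1/flows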

 
