NSX-T Protection: Who Needs Protection?

While working with the Python SDK for NSX-T, I happened to notice a parameter on objects called '_protection'.  Here is the definition of the parameter directly from the source code:

Protection status is one of the following:

PROTECTED - the client who retrieved the entity is not allowed to modify it.
NOT_PROTECTED - the client who retrieved the entity is allowed to modify it.
REQUIRE_OVERRIDE - the client who retrieved the entity is a super user and can modify it, but only when providing the request header X-Allow-Overwrite=true.
UNKNOWN - the _protection field could not be determined for this entity.

This attribute may be present in responses from the server, but if it is present in a request to the server it will be ignored.

Notice the above states that this attribute is ignored if it is set in a client request.  Once an object is protected, though, other users cannot delete or edit it in the NSX Manager (enterprise admins can, but only with a special override header).  Having this functionality could be useful, so let's dive a bit deeper into how to use it!

Normal REST calls
To prove that setting this attribute does not work with a regular REST call, let's try it.  Below is the object we want to create in the NSX Manager:

{
"display_name" : "TestGroup",
"_protection" : "REQUIRE_OVERRIDE"
}

We will attempt to set the _protection attribute by including it in the request body and sending the call to the NSX Manager via curl.

curl -k -u admin -d @testgroup.json -X POST https://XX.XX.XX.XX/api/v1/ns-groups/ --header "Content-Type: application/json"

{
"resource_type" : "NSGroup",
"id" : "8d8fe782-7767-4b39-a26e-5af09e48a281",
"display_name" : "TestGroup",
"members" : [ ],
"member_count" : 0,
"_create_time" : 1537278850472,
"_last_modified_user" : "admin",
"_last_modified_time" : 1537278850475,
"_system_owned" : false,
"_create_user" : "admin",
"_protection" : "NOT_PROTECTED",
"_revision" : 1
}

Notice how the object in the GUI does not have a lock next to it. That is because, as stated above, a client request cannot set the protection attribute; the setting can only be done by the NSX Manager. What we need to perform this change is a trusted ID in the NSX Manager that is allowed to set the attribute upon creation. We need a principal ID defined in the system.

Create a Principal ID
Principal IDs must be created based on a trusted certificate that is uploaded to the NSX Manager. The NSX Manager does not validate the certificate chain, so a basic self-signed certificate will do just fine here. In this example, I will be creating a cert using openssl. The NSX Manager can be very picky, so use the example below as a baseline template for creating your certificate:

openssl req -newkey rsa:2048 -extensions usr_cert -nodes -keyout test.key -x509 -days 365 -out test.crt -subj "/C=US/ST=Michigan/L=Detroit/O=NSX/CN=test" -sha256
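
Before uploading, it is worth sanity-checking the generated certificate. A quick way to do that (a sketch using standard openssl options) is:

openssl x509 -in test.crt -noout -subject -dates
# Confirm the subject and validity window look right
openssl x509 -in test.crt -noout -fingerprint -sha256
# The SHA-256 fingerprint helps you recognize the cert later in the NSX Manager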

The openssl command generates a certificate (test.crt) and a private key (test.key). We can now upload these to the NSX Manager:

Go to System->Trust and click 'Import'.

Upload the cert and key into the NSX Manager.

You should now see the cert listed in the NSX Manager.

Now that the cert is uploaded, we can grab its ID so we can create a principal ID from it. The cert will be used to validate the REST calls sent from that machine.
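
If you prefer the API over the GUI, you can also pull the certificate ID with a GET against the trust-management endpoint (the same /api/v1/trust-management path used for principal identities later in this post):

curl -k -u admin -X GET https://XX.XX.XX.XX/api/v1/trust-management/certificates
# Find your cert in the results and note its "id" value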

The example below is the payload needed to create a principal ID using the cert ID of “d5f0549d-004f-45f7-b544-97251972c46c”.

{
"name" : "testuser",
"node_id" : "testuser",
"is_protected" : "true",
"certificate_id" : "d5f0549d-004f-45f7-b544-97251972c46c",
"permission_group" : "read_write_api_users"
}

The node_id attribute can be used to differentiate multiple nodes in a cluster that could make changes to the NSX Manager; this is primarily used in a VIO (VMware Integrated OpenStack) environment. In this case, we make it the same as the 'name' attribute. Also, the 'is_protected' attribute MUST be set to true.  With this, any object created by this principal ID will be protected from other admin users.

We use the admin account to create the principal ID:
curl -k -u admin -d @principalid.json -X POST https://XX.XX.XX.XX/api/v1/trust-management/principal-identities/ --header "Content-Type: application/json"

{
"resource_type" : "PrincipalIdentity",
"id" : "eabc82a4-c5e7-4e14-84fb-21de38a898d1",
"display_name" : "testuser@testuser",
"tags" : [ ],
"certificate_id" : "d5f0549d-004f-45f7-b544-97251972c46c",
"role" : "enterprise_admin",
"name" : "testuser",
"permission_group" : "undefined",
"is_protected" : true,
"node_id" : "testuser",
"_create_time" : 1537280635914,
"_last_modified_user" : "admin",
"_last_modified_time" : 1537280635914,
"_system_owned" : false,
"_create_user" : "admin",
"_protection" : "NOT_PROTECTED",
"_revision" : 0
}

If it worked, we should have something like the output above. The NSX Manager will now show the new testuser principal ID:
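
You can also confirm through the API by listing the principal identities (a quick check against the same endpoint we POSTed to):

curl -k -u admin -X GET https://XX.XX.XX.XX/api/v1/trust-management/principal-identities
# The testuser entry should appear with is_protected set to true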

Now we can test!!

Testing REST Calls
I deleted the old “testgroup” and will recreate it with the principal ID now:

{
"display_name" : "TestGroup"
}

curl -k --cert ./certs/test/test.crt --key ./certs/test/test.key -d @testgroup.json -X POST https://XX.XX.XX.XX/api/v1/ns-groups/ --header "Content-Type: application/json"

{
"resource_type" : "NSGroup",
"id" : "0ec60ce2-deea-4e74-9a0c-d867e88c16e7",
"display_name" : "TestGroup",
"members" : [ ],
"member_count" : 0,
"_create_time" : 1537281161137,
"_last_modified_user" : "testuser",
"_last_modified_time" : 1537281161140,
"_system_owned" : false,
"_create_user" : "testuser",
"_protection" : "NOT_PROTECTED",
"_revision" : 1
}

Now you are probably looking at the output above and saying, "Wait a minute! The _protection attribute still says NOT_PROTECTED!"

Correct, because from the perspective of testuser, it CAN edit that value. If we look at the object in the NSX Manager, we will see that it has a lock symbol next to it and shouldn't be editable.

As you can see, the edit button is grayed out in the GUI and there is a lock symbol by the object. We successfully created a protected object in the NSX Manager! Let’s look at the object from the perspective of another user:

curl -k -u admin -X GET https://XX.XX.XX.XX/api/v1/ns-groups/

{
"resource_type" : "NSGroup",
"id" : "0ec60ce2-deea-4e74-9a0c-d867e88c16e7",
"display_name" : "TestGroup",
"members" : [ ],
"member_count" : 0,
"_create_time" : 1537281161137,
"_last_modified_user" : "testuser",
"_last_modified_time" : 1537281161140,
"_system_owned" : false,
"_create_user" : "testuser",
"_protection" : "REQUIRE_OVERRIDE",
"_revision" : 1
}

The admin account, from its perspective, can see that this is a protected object and that a special override would be required to delete or edit it. If the user were not an enterprise admin, the _protection attribute would say "PROTECTED".
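
Before using the override, you can confirm the protection is actually enforced. A plain DELETE from the admin account, without the override header, should be rejected (shown as a sketch; expect an error response rather than a deletion):

curl -k -u admin -X DELETE https://XX.XX.XX.XX/api/v1/ns-groups/0ec60ce2-deea-4e74-9a0c-d867e88c16e7
# No X-Allow-Overwrite header, so the NSX Manager should refuse to delete the protected object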

To override it and delete the object, we have to set a special header variable called X-Allow-Overwrite to ‘true’:

curl -k -u admin -X DELETE https://XX.XX.XX.XX/api/v1/ns-groups/0ec60ce2-deea-4e74-9a0c-d867e88c16e7 --header "X-Allow-Overwrite: true"

Now if we look in the NSX Manager, it should be gone:

All set! Hope this was helpful.

AJ

NSX-T Controller Installation on KVM

Overview

This post builds upon my previous post named “NSX-T Manager Installation on KVM“.  In this post we will be:

  • Installing 3x NSX Controllers
  • Joining all 3 controllers to the NSX Management Plane
  • Creating a control cluster from the first NSX Controller created
  • Joining the other 2 controllers to the NSX control cluster

By the end we will have the starting point for building an NSX fabric that Edge and compute nodes can use for overlay tunneling.

The network topology will look like the picture above once we have completed our work.  We used the 10.10.1.0/24 (VLAN 10) network as our out-of-band (OOB) management subnet and made the NSX Manager 10.10.1.200.  We will use the next three IP addresses after .200 for the controllers:

Controller 1: 10.10.1.201
Controller 2: 10.10.1.202
Controller 3: 10.10.1.203

We will be using the latest controller code (as of the time of this writing), nsx-controller-2.0.0.0.0.6522091.qcow2.  You must have rights to download this code revision from VMware's website.

Deploy the primary controller in KVM

Source: Deploy NSX Controller

Like with the NSX Manager, we need to configure the basic settings into the image using guestfish.  If you need to revisit what the file looked like, please go back to the first blog post (link above) to review the information needed.  For now, I will just show the commands to apply the settings to the qcow2 images.

I created three guestfish config files named guestinfo, guestinfo2, and guestinfo3.  These files are applied to Controller1, Controller2 and Controller3 respectively.  The commands to execute this action are below:
guestfish --rw -i -a nsx-controller1.qcow2 upload guestinfo /config/guestinfo
guestfish --rw -i -a nsx-controller2.qcow2 upload guestinfo2 /config/guestinfo
guestfish --rw -i -a nsx-controller3.qcow2 upload guestinfo3 /config/guestinfo

NOTE
I did not specify this in the last post.  I took the original image named nsx-controller-2.0.0.0.0.6522091.qcow2 and copied a clean version to a new folder that holds the qcow2 disk images.  I like to keep things separated in my host machines, but this is not required.  I just created three new copies of the master qcow2 image and named them nsx-controller1.qcow2, nsx-controller2.qcow2, and nsx-controller3.qcow2.
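
In other words, something like the following (the folder path matches my lab layout from the virt-install command below; adjust to your own):

cp nsx-controller-2.0.0.0.0.6522091.qcow2 /vmstorage/NSX-T/controller/nsx-controller1.qcow2
cp nsx-controller-2.0.0.0.0.6522091.qcow2 /vmstorage/NSX-T/controller/nsx-controller2.qcow2
cp nsx-controller-2.0.0.0.0.6522091.qcow2 /vmstorage/NSX-T/controller/nsx-controller3.qcow2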

Now that the images have been prepped, let’s deploy the first controller:
virt-install --import \
--name nsx-controller1 \
--ram 16384 \
--vcpus 4 \
--network network=vsMgmt,model=virtio \
--disk path=/vmstorage/NSX-T/controller/nsx-controller1.qcow2,format=qcow2 \
--graphics vnc,listen=0.0.0.0 --noautoconsole

Warning
If you get permission issues while trying to start the VM, check to make sure SELinux is disabled.  Disabling it persistently will require a reboot.
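
If you need to do that, the standard approach on Fedora (assuming the default /etc/selinux/config layout) looks like this:

# Switch to permissive mode immediately (no reboot, often enough to test)
sudo setenforce 0
# Disable SELinux persistently, then reboot for it to take effect
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sudo reboot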

If everything worked as expected, the VM should be started.
virsh list

Output:
Id   Name              State
----------------------------------------------------
1    nsx-manager1      running
2    nsx-controller1   running

The --graphics vnc,listen=0.0.0.0 virt-install option allows us to connect to the console of the VM via VNC.  You can use any VNC client to connect to the port specified for the VM.  You can find that port by using the following command:
virsh vncdisplay <domain>

Example:
virsh vncdisplay nsx-controller1

Output:
:1

VNC ports, by default, start at 5900.  In the example above, the command returned :1, so the port used is 5901.  If the KVM host is 10.10.1.10, then we VNC to 10.10.1.10:5901 to connect to the console of nsx-controller1.
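
If you want to script the port math, the display number from vncdisplay can be turned into the TCP port (a small sketch; adjust the host IP for your environment):

DISPLAY_NUM=$(virsh vncdisplay nsx-controller1 | tr -d ': ')
# VNC ports start at 5900, so add the display number to get the TCP port
echo "Connect your VNC client to 10.10.1.10:$((5900 + DISPLAY_NUM))"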

Warning
If you have trouble connecting to the VM console, check to make sure the firewalld service is stopped on the KVM host.
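
To stop it and keep it from coming back after a reboot:

sudo systemctl stop firewalld
sudo systemctl disable firewalld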

Join the controller to the management plane

In order to join the first controller to the management plane, we need the API thumbprint of the NSX Manager.  SSH into the NSX Manager and run the command below:
get certificate api thumbprint

SSH into the controller and execute the following:
join management-plane <NSX-Manager-IP> username admin thumbprint <NSX-Manager-thumbprint>

The controller will ask you for the password of the admin account.  Enter in the NSX Manager admin account password.  This should be the same as the password you use to log into the NSX Manager GUI.

If all goes well, you should see a success message on the controller.

Example:
nsx-controller1> join management-plane 10.10.1.200 username admin thumbprint <NSX-Manager-thumbprint>
Password for API user:
Node successfully registered and controller restarted

We can confirm the controller is connected:
nsx-controller1> get managers
- 10.10.1.200 Connected

Initialize Control Cluster (only for the primary controller; skip when creating the other 2 controllers)

Now that the controller is joined to the management plane, we need to initialize the control cluster in the primary controller.

nsx-controller1> set control-cluster security-model shared-secret secret <password>
Security secret successfully set on the node.

We create a password to use to join the other controllers to the control cluster.  This can be the same password you used before or a completely new one you want to make up.  Save this password; you will need it later.

nsx-controller1> initialize control-cluster
Control cluster initialization successful.

We can confirm the status of the control cluster with:
get control-cluster status verbose

Example:
nsx-controller1> get control-cluster status verbose
NSX Controller Status:

uuid: 3d843907-05fe-4dbb-a5d3-e585a43b9190
is master: false
in majority: false
This node has not yet joined the cluster.

Cluster Management Server Status:

uuid                                   rpc address   rpc port   global id   vpn address   status
fc6e9632-2946-40f8-bf37-19c9447e0fb4   10.10.1.201   7777       1           169.254.1.1   connected

Zookeeper Ensemble Status:

Zookeeper Server IP: 169.254.1.1, reachable, ok
Zookeeper version: 3.5.1-alpha--1, built on 09/01/2017 12:29 GMT
Latency min/avg/max: 0/1/47
Received: 216
Sent: 232
Connections: 2
Outstanding: 0
Zxid: 0x10000001d
Mode: leader
Node count: 23
Connections: /169.254.1.1:35486[1](queued=0,recved=208,sent=225,sid=0x100000637390001,lop=GETD,est=1508440715972,to=40000,lcxid=0xcd,lzxid=0x10000001d,lresp=439090,llat=0,minlat=0,avglat=1,maxlat=19)
/169.254.1.1:35532[0](queued=0,recved=1,sent=0)

Next steps

Repeat the steps above for deployment of the other two controllers.  Using the template above, you should be able to deploy them and join them to the management plane.  Skip the control cluster initialization and continue below once the two other controllers are ready.
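
For reference, the deploy commands for the other two controllers differ only in the VM name and disk path (a sketch based on the template above):

virt-install --import \
--name nsx-controller2 \
--ram 16384 \
--vcpus 4 \
--network network=vsMgmt,model=virtio \
--disk path=/vmstorage/NSX-T/controller/nsx-controller2.qcow2,format=qcow2 \
--graphics vnc,listen=0.0.0.0 --noautoconsole

# Repeat with nsx-controller3 and nsx-controller3.qcow2 for the third controller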

Join other controllers to control cluster

Set the controller shared secret password so it can join the control cluster.  This must be the same password you set on the primary controller in the steps above.  See, I told you to remember that password 🙂

nsx-controller2> set control-cluster security-model shared-secret secret <password>
Security secret successfully set on the node.

Now get the second controller's API thumbprint:

nsx-controller2> get control-cluster certificate thumbprint
...

We will need this thumbprint to join the controller to the control cluster.  Now go back to your SSH session on Controller1.  You need to tell Controller1 to connect to Controller2 so it can join the control cluster:

nsx-controller1> join control-cluster 10.10.1.202 thumbprint <nsx-controller2-thumbprint>
Node 10.10.1.202 has successfully joined the control cluster. Please run 'activate control-cluster' command on the new node.

Now jump back into the SSH session of Controller2 and activate the control cluster.

nsx-controller2> activate control-cluster
Control cluster activation successful.

Repeat the above steps to join Controller3 to the control cluster.
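
For completeness, the Controller3 sequence is identical, just with its IP (10.10.1.203):

nsx-controller3> set control-cluster security-model shared-secret secret <password>
nsx-controller3> get control-cluster certificate thumbprint
nsx-controller1> join control-cluster 10.10.1.203 thumbprint <nsx-controller3-thumbprint>
nsx-controller3> activate control-cluster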

Confirmation

nsx-controller1> get control-cluster status
uuid: 3d843907-05fe-4dbb-a5d3-e585a43b9190
is master: true
in majority: true
uuid                                   address       status
3d843907-05fe-4dbb-a5d3-e585a43b9190   10.10.1.201   active
1c790c2b-6455-4ceb-8c82-9955beca8b54   10.10.1.202   active
88b44c52-3104-4dd8-8280-8ea0106df324   10.10.1.203   active

As you can see, all three controllers (10.10.1.201, 10.10.1.202, and 10.10.1.203) are joined and active.  We should see the controllers in a green state in the NSX Manager dashboard.

What’s next

Now that we have a working control cluster, we are ready to start building the fabric.  In the next post, we will create two NSX Edges, join them to the management plane, create transport zones, and work through a list of other tasks involved in creating an NSX overlay fabric.

Thanks,

AJ

NSX-T Manager Installation on KVM

Introduction

NSX-T is the newest flavor of NSX that is available.  Previously, most people familiar with NSX have been working with NSX-V (NSX for vSphere), and while there are lots of great things about that product, it does have its limitations.  NSX-T looks to solve some of those problems as the product evolves.  I am starting to work with NSX-T and, while I am by no means an expert, my goal is to document my experience with the product so that others may see what it can do.  First, a little about the deployment I am looking to build:

NSX-T Components

1x NSX Manager (KVM)
3x NSX Controllers (KVM)
2x NSX Edges (KVM)
2x Compute Hosts (ESXi)

Network Deployment

The drawing above shows what we will have by the end of this post.  We already have a KVM host configured with Fedora 23.  We have configured two Open vSwitch bridges with physical Ethernet uplinks going to a physical Cisco switch.  The Cisco switch has two VLANs for this lab so far:

VLAN 10 – 10.10.1.0/24 – Used for lab management of virtual network infrastructure
VLAN 20 – 10.10.20.0/24 – Used for TEPs that will participate in the overlay (Geneve in the case of NSX-T)

We have a network XML file that will be used to create a network entity in libvirtd. The network in libvirtd will link to the Open vSwitch (OVS) bridge vsMgmt.  This allows us to specify that libvirtd network in the VM's configuration file so that when the VM starts up, libvirtd dynamically adds its virtual NIC (VIF) to the OVS bridge.  The file looks like this:

vsMgmt.xml
<network>
<name>vsMgmt</name>
<forward mode='bridge'/>
<bridge name='vsMgmt'/>
<virtualport type='openvswitch'/>
</network>

Create network in libvirtd

Once you have the vsMgmt.xml file created with the information above, you need to define the network in libvirtd.  The steps are below:
virsh net-define vsMgmt.xml
virsh net-start vsMgmt
virsh net-autostart vsMgmt

The net-define command creates the network in libvirtd.  net-start starts the network so that VMs being spun up can use it.  net-autostart ensures that the network will start again once the libvirtd service starts up (nice to have when the host reboots).  Note that net-start and net-autostart take the network name, not the XML file.
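
You can confirm the network is defined, active, and set to autostart with:

virsh net-list --all
# vsMgmt should show as active with autostart enabled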

Prepare the NSX Manager for installation

Source:  Install NSX Manager on KVM

You first have to prepare the NSX Manager qcow2 image.  Using guestfish, we set things like the NSX Manager role, passwords, SSH, IP info, and more.  We create an XML file with all of the attributes.  Below is an example taken right from the VMware docs; the link above will take you right to them.
<?xml version="1.0" encoding="UTF-8"?>
<Environment
xmlns="http://schemas.dmtf.org/ovf/environment/1"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
<PropertySection>
<Property oe:key="nsx_role" oe:value="nsx-manager"/>
<Property oe:key="nsx_allowSSHRootLogin" oe:value="True"/>
<Property oe:key="nsx_cli_passwd_0" oe:value="<password>"/>
<Property oe:key="nsx_dns1_0" oe:value="192.168.110.10"/>
<Property oe:key="nsx_domain_0" oe:value="corp.local"/>
<Property oe:key="nsx_gateway_0" oe:value="192.168.110.1"/>
<Property oe:key="nsx_hostname" oe:value="nsx-manager1"/>
<Property oe:key="nsx_ip_0" oe:value="192.168.110.19"/>
<Property oe:key="nsx_isSSHEnabled" oe:value="True"/>
<Property oe:key="nsx_netmask_0" oe:value="255.255.255.0"/>
<Property oe:key="nsx_ntp_0" oe:value="192.168.110.10"/>
<Property oe:key="nsx_passwd_0" oe:value="<password>"/>
</PropertySection>
</Environment>

Make sure to edit the template above with your appropriate values.  I used 10.10.1.200/24 as the IP and subnet mask with the gateway being 10.10.1.1.  Be sure to set a password that meets the complexity standards.  IF IT DOES NOT, THEN IT WILL NOT APPLY THE PASSWORD UPON BOOT AND YOU WILL BE UNABLE TO LOG IN!

Take note of the password specs below:

  • At least eight characters
  • At least one lower-case letter
  • At least one upper-case letter
  • At least one digit
  • At least one special character
  • At least five different characters
  • No dictionary words
  • No palindromes

*** Important note ***

You MUST define the nsx_role value.  With the NSX-T unified appliance, a role is not assigned by default.  If you do not set this and start up the VM using the qcow2 image, the NSX Manager will not boot correctly.  The issue number is 1944678 and the link to the bug is here: NSX-T 2.0 Release Notes

To apply the values using guestfish, use the following command:
sudo guestfish --rw -i -a nsx-manager.qcow2 upload guestinfo /config/guestinfo

The image is now ready to use.

Create NSX Manager VM

Now that we have the image prepared, we will use libvirtd to create the VM.  I used the following values:
virt-install --import \
--name nsx-manager1 \
--ram 16384 \
--vcpus 4 \
--network network=vsMgmt,model=vmxnet3 \
--disk path=/vmstorage/NSX-T/manager/nsx-manager.qcow2,format=qcow2 \
--nographics

We create a VM that has 16 Gigs of RAM, 4 virtual CPUs, and a connection to the management network.  Notice in the network statement we specified the vsMgmt network we created earlier in the post.  This will dynamically associate the vnet interface with the OVS management bridge named vsMgmt.

Once the command is executed, we should see the vm created:
virsh list

Output:
Id   Name           State
----------------------------------------------------
1    nsx-manager1   running

We can also look at the virtual interface (VIF) created and the network it is connected to:
virsh domiflist nsx-manager1

Output:
Interface   Type     Source   Model     MAC
-------------------------------------------------------
vnet0       bridge   vsMgmt   vmxnet3   52:54:00:52:fd:19

vnet0 was dynamically created and assigned to the vsMgmt network.  We should be able to see this in OVS:
ovs-vsctl show

Output:
<--Omitted Extra-->
Bridge vsMgmt
Port "vnet0"
Interface "vnet0"
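
As an extra check, you can ask OVS which bridge the interface landed on (ovs-vsctl port-to-br is part of standard OVS):

ovs-vsctl port-to-br vnet0
# Should return: vsMgmt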

Confirmation

After about 5-10 minutes, we should be able to see a login screen for the NSX Manager.  I used 10.10.1.200, so we will navigate to https://10.10.1.200

Looks good so far.  Let's log in with the username admin and the password we specified above.  It should have been configured when we applied the values using guestfish.

If we got here, then we are good to go!  Next will be creating the controllers, joining them to the management plane, and then creating a control cluster.

Until next time,

AJ