• Cloud Director 10.3.1 – SAML Certificate won't regenerate

    There is a known bug in Cloud Director 10.3.1 when you try to regenerate the SAML federation certificate. Let's run through what happens and how to resolve it!

    Below is an image which shows my certificate has expired

    Now click on Edit and then click Regenerate

    Click Ok to regenerate

    As you can see it shows that the certificate has updated. Now click Save

    Below we can see the certificate expiration has not updated correctly…

    Now let's try the same process again: click Edit and select Regenerate

    Click Ok to regenerate

    But this time instead of clicking on Save, click on Discard

    As you can see the certificate expiration has now been updated!

    This issue in the UI will be resolved in Cloud Director 10.3.3

  • Synology C2 Backup for Business

    Synology's C2 Backup for Business is a cloud-based backup platform for your on-premises or cloud-based workloads. The agent-based backup solution is extremely easy to set up. I recently tested it out, so let's run through the process from start to finish.

    For more information and to find out how to get started with a free trial click on the link below.

    Click here to find out more

    You will be prompted to log in with your Synology Account. You will also be given the option to create one if you don't have one set up already.

    Once you are logged in you will be able to select your location and plan options.

    Select your location – 3 locations are available: Europe (Frankfurt), North America (Seattle) and APAC (Taiwan).
    Select your plan – plans range from 5 TB to 200 TB

    Setup your domain

    Enter your payment information. You also have the option to select Set up later.

    To continue without setting up a payment option click Ok (trial periods may vary depending on when you sign up)

    You will then be prompted to log back in and set up your C2 Encryption Key. Enter a key you would like to use and make sure you save a copy of it somewhere secure.

    Next save a copy of your recovery code by clicking Download. This can be used if you forget your C2 encryption key.

    For this example I will be protecting on-premises VMs, but you also have the ability to protect Cloud App workloads, for example Microsoft 365.

    We are now logged into the C2 Backup Portal.

    The first thing we need to do is set up a user account that we are going to associate with the backup job itself. To do this we need to click on Management in the top menu and then User in the left hand column. Then click on Add Users.

    You will be prompted to launch C2 Identity to create the user accounts. Click on Open C2 Identity.

    From here you have the option to import users from a database or add users manually. For this example I will create a couple of users manually.

    You will also be given the option to Send an activation email to them, Specify a Password or Activate later. The user will also be assigned a license.

    Then confirm the settings by selecting Ok.

    Now that my users have been created we can switch back to the C2 Backup Portal so that we can add them in.

    Under Management, then User select Add Users. Then select your user accounts and click Add.

    The users have now been imported into the C2 Backup Portal.

    Next select On-Premises from the top C2-Backup menu and then select Personal Computer or Physical Server, depending on which OS you wish to back up.

    Personal Computer is for your workstations or laptops running Windows 7 SP1, Windows 10 and Windows 11.
    Physical Server covers Windows Server 2008 R2 SP1, 2012, 2016 and 2019.

    For this example I have a mix of both Windows Desktops and Windows Servers that I would like to protect.
    Click on Add Device, then download the agent ready to install on the VMs you would like to back up.

    NOTE – This can be installed manually or pushed out via group policy to your servers or workstations.

    The agent is around 94 MB in size. Once the download has completed you can install it on the servers you wish to back up.

    Launch the installer and click Next

    Accept the license agreement and click Next

    Then click Install

    Once the install is complete click Finish to launch the application

    Now it's time to configure the agent on the Windows Server. Click Let's Start

    Next you will be prompted to log in with your Synology account. Log in using one of the user accounts we created earlier.

    Once logged in you will be prompted to perform a full backup. For this example I am going to select Not Now and then create a Backup Policy once I have installed the agent on each of my VMs.

    After installing the agent on each of the VMs that I would like to back up, you will see them all listed in the C2 Backup console. Below is a mix of Windows Server 2019 and Windows Server 2022.

    Below is a list of the Windows 10 desktops that I have installed the agent on. You will notice that I have used the second user account I created earlier to manage these backups.

    Now it's time for the Backup Policy setup. Select Backup Policy in the left hand menu and click on Create

    Enter a Name for the policy and configure the Backup scope. I have selected to back up the entire device. Then click Next

    On the Backup schedule drop down list select Custom

    Enter the settings you would like to use and click Next

    Select your retention settings and click Next

    Review the summary and click on Ok

    Select the servers that you would like to add to the policy and click Ok

    Click Ok to apply the backup policy

    Now let's create another policy for our Desktops. Click the Create button under the Backup Policy menu.

    Enter a Name for the policy and configure the Backup scope. Then click Next

    I am going to create another Custom schedule. Then click Next

    Set the retention and click Next

    Review the summary and click Ok

    Then select the Desktops that you would like to add to this policy and click Ok

    Click Ok to apply the backup policy

    You have now created two backup policies which will run at the scheduled time each night.

    Restoring the data is extremely easy: select the server, then click the More button and select Download files/folders.

    Enter your encryption key and click Confirm to mount the backup data.

    Select the files or folders that you would like to restore. You also have the option of selecting which version from the list on the right. Once you have selected the files you want to restore, click on the Download button.

    The folder will now download as a zip file ready to be sent to the user. The restore portal is now also available for users to log in directly and perform their own restores.

    That's it! The product itself is secure and very straightforward to use, and Synology continues to add new features to enhance its functionality!

  • Backup and Restore Cloud Director 10.3.1

    Something new that I wanted to talk about in Cloud Director 10.3.1 is the ability to take a backup of the Cloud Director appliance from the appliance management interface. This backup can then be used to restore the primary appliance. In this post I will run through how it all works!

    To get started log into the appliance – https://cloud-director-ip:5480 as the root user.

    Select Backup and click on Backup Now

    Once complete you will see the newly created backup file. These files are saved in the following location on the appliance,
    /opt/vmware/vcloud-director/data/transfer/backups

    One important thing to confirm is the current transfer share you have configured in Cloud Director. One quick way to check is to SSH into your existing Cloud Director cell and run cat /etc/fstab to see where it is mounted. Take note of this location.

    So my transfer share is mounted to 192.168.1.3:/vcloud in my lab.
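    As a quick sketch, the mount source can also be pulled out of fstab with awk. The demo below runs against a sample fstab line matching my lab value rather than the live file; on a real cell you would point the awk at /etc/fstab itself.

    ```shell
    # Demo: extract the transfer-share source from an fstab entry.
    # /tmp/fstab.demo stands in for /etc/fstab on the cell.
    cat > /tmp/fstab.demo <<'EOF'
    192.168.1.3:/vcloud /opt/vmware/vcloud-director/data/transfer nfs rw 0 0
    EOF
    # Print the source (field 1) of the line whose mount point is the transfer share
    awk '$2 == "/opt/vmware/vcloud-director/data/transfer" {print $1}' /tmp/fstab.demo
    ```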

    Now that we have created our backup and confirmed the transfer share we are using it’s time to redeploy the primary database cell.

    Switch over to vCenter and select Deploy OVF Template and then click on Upload Files and locate your Cloud Director 10.3.1 ova file. Once complete click NEXT

    Enter a name for the virtual machine and click NEXT

    Select your compute resource and click NEXT

    Review the details and click NEXT

    Accept the License Agreement and click NEXT

    Choose your deployment configuration and click NEXT. For this example I am going to use Primary – small.

    Select your datastore and click NEXT

    Assign a destination network for eth0 and eth1 and click NEXT

    Complete the deployment parameters and click NEXT. Basic examples given below.

    Review your settings and click FINISH

    Once complete power up the new Cloud Director cell and log into the appliance management interface as the root user – https://cloud-director-ip:5480

    Select Restore from Backup on the left hand menu and enter the NFS mount address we checked earlier. Then click NEXT

    Select the backup file you would like to restore and click NEXT

    Enter the NFS mount point for the transfer share and click RESTORE

    NOTE – This can be the same transfer share mount point or a new one.

    Once the restore has completed click CLOSE

    That's it! Your Cloud Director cell will now be back online. From here you can deploy any required application cells.

    NOTE – If your cell comes back up and your certificate wasn’t originally referenced from the transfer share you can use the following post to replace the certificate!

    @steveonofaro

  • Replacing Certificates in Cloud Director 10.3.1

    Changing your certificates in Cloud Director has always been an interesting topic. A lot of people find the process difficult, and I am sure support gets a lot of calls about it. Over time I hope it becomes easier for people who might not be used to working with different types of certificates. Until then, here is the new process for replacing your certificates in Cloud Director 10.3.1

    If you have upgraded from earlier versions of Cloud Director and you have an existing certificate in place then it will still work fine on version 10.3.1. The issue may arise when you restore a cell from backup or if your certificate is about to expire and you want to replace it. Then you will need to follow the new process.

    In a single cell environment VMware recommends replacing the certificates in the following locations,

    /opt/vmware/vcloud-director/etc/user.http.pem
    /opt/vmware/vcloud-director/etc/user.http.key
    /opt/vmware/vcloud-director/etc/user.consoleproxy.pem
    /opt/vmware/vcloud-director/etc/user.consoleproxy.key

    If you have a multi cell environment and are using wildcards then they recommend placing the new copies of your certificate files into the transfer share,

    /opt/vmware/vcloud-director/data/transfer/user.http.pem
    /opt/vmware/vcloud-director/data/transfer/user.http.key
    /opt/vmware/vcloud-director/data/transfer/user.consoleproxy.pem
    /opt/vmware/vcloud-director/data/transfer/user.consoleproxy.key

    In my lab I have an existing pfx file which I would like to upload to my Cloud Director environment. To do this we will need to extract the certificates out as pem and key files. You can install openssl on your laptop or management server, but it comes built into Cloud Director, so in this example we will use the cloud cell.

    Using WinSCP, copy your pfx file to your Cloud Director cell and store it in the /tmp directory.

    When I first extracted the key and pem files, they contained Bag Attributes before each certificate in the chain. This will cause the certificate replacement process to fail, so it needs to be removed.

    Thankfully removing the Bag Attributes is straightforward, and I have included the commands below to export each certificate file you will need without the attributes.

    In order to replace the certificates in Cloud Director 10.3.1 we need the following files,

    user.http.key
    user.http.pem
    user.consoleproxy.key
    user.consoleproxy.pem

    To get started SSH into your Cloud Director cell and browse to the /tmp directory. Then run the following command to extract the user.http.key,

    openssl pkcs12 -in /tmp/vcloud.pfx -nocerts -nodes | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > /tmp/user.http.key

    Then run the following command to extract the user.http.pem,

    openssl pkcs12 -in /tmp/vcloud.pfx -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/user.http.pem

    Then run the following command to extract the user.consoleproxy.key,

    openssl pkcs12 -in /tmp/vcloud.pfx -nocerts -nodes | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > /tmp/user.consoleproxy.key

    Then run the following command to extract the user.consoleproxy.pem,

    openssl pkcs12 -in /tmp/vcloud.pfx -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/user.consoleproxy.pem
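    If you want to dry-run the extraction before touching your real vcloud.pfx, the same pipeline can be exercised against a throwaway PFX. Everything below uses demo names (demo.key, demo.pfx and so on) rather than your real files:

    ```shell
    # Build a throwaway key + certificate and bundle them as a demo PFX
    openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
      -out /tmp/demo.crt -days 1 -subj "/CN=demo" 2>/dev/null
    openssl pkcs12 -export -inkey /tmp/demo.key -in /tmp/demo.crt \
      -passout pass: -out /tmp/demo.pfx

    # Extract the private key, stripping the Bag Attributes with the same sed filter
    openssl pkcs12 -in /tmp/demo.pfx -passin pass: -nocerts -nodes 2>/dev/null \
      | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > /tmp/demo.http.key

    # Extract the certificate without the key
    openssl pkcs12 -in /tmp/demo.pfx -passin pass: -nokeys 2>/dev/null \
      | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/demo.http.pem

    # Confirm no Bag Attributes survived the filter
    grep -q 'Bag Attributes' /tmp/demo.http.pem || echo "no Bag Attributes in output"
    ```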

    At this stage it is important to check the certificate chain order in your pem files. The order should show the main certificate first, then the intermediate certificate and then the Global Root CA certificate.
    If your intermediate and global root certs are around the wrong way use the vi command to edit the pem file and change the order. The command to use will be vi /tmp/user.http.pem and once you have the file open press i to insert.

    For example, if your global root cert is above your intermediate, copy the global root section from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- and paste it below the intermediate. Then press Esc, go back to the first line of the original section and press dd on each line to delete it. Once complete, type :wq to save the changes. Your pem file should now be in the correct order. You will need to do this for user.http.pem and user.consoleproxy.pem
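    To double-check the order without eyeballing the file, you can split a pem bundle and print each certificate's subject in file order. The sketch below builds a two-certificate demo bundle first (stand-ins for a leaf and an intermediate); on the cell you would run the awk and loop against your real user.http.pem instead:

    ```shell
    # Build a two-certificate demo bundle
    openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/leaf.key \
      -out /tmp/leaf.crt -days 1 -subj "/CN=leaf" 2>/dev/null
    openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/int.key \
      -out /tmp/int.crt -days 1 -subj "/CN=intermediate" 2>/dev/null
    cat /tmp/leaf.crt /tmp/int.crt > /tmp/demo-chain.pem

    # Split the bundle into one file per certificate, preserving order
    rm -f /tmp/chain-*.pem
    awk '/-BEGIN CERTIFICATE-/{n++; fn="/tmp/chain-" n ".pem"} {print > fn}' /tmp/demo-chain.pem

    # Print the subjects in order: the leaf should come first
    for f in /tmp/chain-*.pem; do openssl x509 -in "$f" -noout -subject; done
    ```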

    My lab only has the one cell, but for this example I am going to pretend my cert is a wildcard and put my new certificates on the transfer share. This simulates what you would do in a production environment, as the cert would then be available to your other cells. If you want to follow this process to replace the certificate in a single cell environment, just change the destination to /opt/vmware/vcloud-director/etc/ instead.

    Run the following commands to copy the certificates to the transfer share,

    cp /tmp/user.http.pem /opt/vmware/vcloud-director/data/transfer/user.http.pem
    cp /tmp/user.http.key /opt/vmware/vcloud-director/data/transfer/user.http.key

    cp /tmp/user.consoleproxy.pem /opt/vmware/vcloud-director/data/transfer/user.consoleproxy.pem
    cp /tmp/user.consoleproxy.key /opt/vmware/vcloud-director/data/transfer/user.consoleproxy.key

    Next we need to check the file ownership. As you can see, the new cert files need their ownership changed to vcloud

    Run the following commands from the transfer share directory to change the ownership of the new certificate files,

    chown vcloud:vcloud user.consoleproxy.key
    chown vcloud:vcloud user.consoleproxy.pem
    chown vcloud:vcloud user.http.pem
    chown vcloud:vcloud user.http.key

    Next apply the new certificates by running the following commands,

    /opt/vmware/vcloud-director/bin/cell-management-tool certificates -j --cert /opt/vmware/vcloud-director/data/transfer/user.http.pem --key /opt/vmware/vcloud-director/data/transfer/user.http.key --key-password root-password

    /opt/vmware/vcloud-director/bin/cell-management-tool certificates -p --cert /opt/vmware/vcloud-director/data/transfer/user.consoleproxy.pem --key /opt/vmware/vcloud-director/data/transfer/user.consoleproxy.key --key-password root-password

    Once complete run the following command to stop the service,

    /opt/vmware/vcloud-director/bin/cell-management-tool cell -i $(service vmware-vcd pid cell) -s

    Then run the following command to start the service,

    systemctl start vmware-vcd

    NOTE – If you want to check the status run – systemctl status vmware-vcd

    NOTE – If you have any additional cells then run the following commands to update their certificates,

    /opt/vmware/vcloud-director/bin/cell-management-tool certificates -j --cert /opt/vmware/vcloud-director/data/transfer/user.http.pem --key /opt/vmware/vcloud-director/data/transfer/user.http.key --key-password root-password

    /opt/vmware/vcloud-director/bin/cell-management-tool certificates -p --cert /opt/vmware/vcloud-director/data/transfer/user.consoleproxy.pem --key /opt/vmware/vcloud-director/data/transfer/user.consoleproxy.key --key-password root-password

    Once complete run the following command to stop the service,

    /opt/vmware/vcloud-director/bin/cell-management-tool cell -i $(service vmware-vcd pid cell) -s

    Then run the following command to start the service,

    systemctl start vmware-vcd

    Now browse to your Cloud Director URL and the certificate should be updated.

    Next you will need to update your public addresses certificate. Since we have already created the pem files, use WinSCP to copy one of them from the /tmp directory back down to your local workstation. The public addresses cert must include the full certificate chain (the endpoint certificate, intermediate certificates and root certificate) in PEM format, without the private key.

    Login to the Cloud Provider portal, select Administration and then click on Public Addresses from the left hand menu. Click on EDIT and then select the Replace Certificate option to browse for your Web Portal replacement certificate file.

    Select Next and enable the Use Web Portal Settings option. Click Next and enter your Cloud Director console proxy URL followed by port 8443, i.e. vcloud.domain.com:8443, then click SAVE.

    That's it! Your Cloud Director 10.3.1 certificates have now been replaced!

  • Changing the site name in Cloud Director

    After deploying a new Cloud Director environment or upgrading to 10.2, you may notice the tenant's site name displaying something similar to "Site name undefined (Site ID: 6d2f1162-b138-4aef-ac42-320f32b5d45c)"

    This is because the site name is not something you set during the initial setup; it has to be manually changed after deployment. When left unset it may appear as a UUID, the hostname of the installation's API public address, or the IP of the cell.

    To make the changes I will be using Postman. You can download a free copy from the following link

    To get started enter your Cloud Director URL and append it with /api/versions and click Send

    Example – https://vcloud.stevenonofaro.net/api/versions

    This should return a status of 200 OK. Scroll down in the body and look for a valid API version.

    Look for one of the versions listed under the value of VersionInfo deprecated="false" as these will be the current working API versions.

    For this example I will go with the latest version 35.0

    <VersionInfo deprecated="false">

    <Version>35.0</Version>

    Create a new connection at the top by clicking on the + button then change the type to POST and enter your Cloud Director URL and append it with /api/sessions.

    Example – https://vcloud.stevenonofaro.net/api/sessions

    Then click on Authorization, change the type to Basic Auth and enter your username and password.

    Then click on the Headers tab, set the Key to Accept and the Value to application/*+xml;version=35.0

    Now click Send. It should then return a status of 200 OK and if you click on the Headers tab again you will see your X-VMWARE-VCLOUD-ACCESS-TOKEN.

    Copy the value, as we will use it as the bearer token in the next step.

    Now switch back to the first GET tab we created initially. Click on the Authorization tab and change the type to Bearer Token. Then paste in the token we just copied from the previous POST tab.

    Then click on Headers and set the Key to Accept and the Value to application/*+xml;version=35.0

    Now update the GET URL to the following,

    Example – https://vcloud.stevenonofaro.net/api/site/associations/localAssociationData

    Then click Send; you should get a status of 200 OK. When the bearer token expires (which it does after 30 minutes) you can just go back to the POST tab and get an updated token.

    Next up we need to look through the Body and try to locate the href URL for the Cloud Director site. It will reference the UUID that is shown in the image below.

    The information we are looking for is the following,

    <Link rel="edit" href="https://vcloud.stevenonofaro.net/api/site/associations/6d2f1162-b138-4aef-ac42-320f32b5d45c" type="application/vnd.vmware.admin.siteAssociation+xml"/>

    Now we need to perform a GET request using the following URL – https://vcloud.stevenonofaro.net/api/site/associations/6d2f1162-b138-4aef-ac42-320f32b5d45c

    Update the GET box and click Send.

    You should again get a status of 200 OK.

    Note – If you get an error you may need to update your bearer token, check the Headers tab to make sure the API version is still set, and then try again.

    Scroll down in the body and near the end you should see the following,

    <SiteName>Site name undefined (Site ID: 6d2f1162-b138-4aef-ac42-320f32b5d45c)</SiteName>

    This is the site name section that we need to update.

    Now copy the whole response, then click on Body, select raw and change Text to XML. Then paste the entry into the box.

    Scroll down and update the site name in the following way,

    <SiteName>Perth</SiteName>

    I have just updated my site name to Perth.
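    If you prefer editing the XML outside Postman, the SiteName swap itself can be sketched with sed. The file below is just a stand-in for the body returned by the GET request:

    ```shell
    # Demo: swap the SiteName element in a saved association document.
    # /tmp/site.xml stands in for the XML body you copied out of Postman.
    cat > /tmp/site.xml <<'EOF'
    <SiteName>Site name undefined (Site ID: 6d2f1162-b138-4aef-ac42-320f32b5d45c)</SiteName>
    EOF
    # Replace whatever is between the SiteName tags with the new name
    sed -i 's|<SiteName>.*</SiteName>|<SiteName>Perth</SiteName>|' /tmp/site.xml
    cat /tmp/site.xml
    ```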

    Click on the Headers tab and set a new Key to Content-type and set the Value to application/vnd.vmware.admin.siteAssociation+xml

    Then change the drop down from GET to PUT and click Send.

    This will then return a status of 200 OK
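    For repeat changes, the whole Postman sequence can be condensed into one shell function using curl. This is only a sketch under the same assumptions as above (API version 35.0, the X-VMWARE-VCLOUD-ACCESS-TOKEN header, and the association href format shown earlier); the URL, credentials and site name you pass in are placeholders for your own environment, and -k is only there for lab self-signed certificates:

    ```shell
    # Sketch: the Postman flow as a single shell function (not run here).
    # Arguments: Cloud Director URL, username, password, new site name.
    update_site_name() {
      vcd="$1"; user="$2"; pass="$3"; name="$4"
      accept="Accept: application/*+xml;version=35.0"

      # Authenticate; the token comes back in the X-VMWARE-VCLOUD-ACCESS-TOKEN header
      token=$(curl -ksi -X POST -u "$user:$pass" -H "$accept" "$vcd/api/sessions" \
        | awk -F': ' 'tolower($1)=="x-vmware-vcloud-access-token"{printf "%s",$2}' | tr -d '\r')

      # Find the edit href for the site association
      href=$(curl -ks -H "$accept" -H "Authorization: Bearer $token" \
        "$vcd/api/site/associations/localAssociationData" \
        | grep -o "$vcd/api/site/associations/[0-9a-f-]*" | head -1)

      # Fetch the document, swap the SiteName, and PUT it back
      curl -ks -H "$accept" -H "Authorization: Bearer $token" "$href" \
        | sed "s|<SiteName>.*</SiteName>|<SiteName>$name</SiteName>|" > /tmp/site.xml
      curl -ks -X PUT -H "$accept" -H "Authorization: Bearer $token" \
        -H "Content-type: application/vnd.vmware.admin.siteAssociation+xml" \
        --data-binary @/tmp/site.xml "$href"
    }
    ```

    You would then call it as, for example, update_site_name https://vcloud.stevenonofaro.net administrator 'your-password' Perth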

    Now it’s time to go back to Cloud Director and login to see if our new site name is displayed.

    And from the main Data Centers view,

    You will also see the completed task when logged in as the system administrator. 

     

    As always use the subscribe box above for new post notifications and follow me on Twitter @steveonofaro

  • Synology Survey Lucky Draw Event

    To be in the running to win a Synology DS920+ or other Synology Merchandise just click on the link below and complete the survey!

     

    Fill out the survey before April 16th – Click Here

     

    Some of the great prizes on offer include:

    – Synology DS920+

    – Synology insulated water bottles

    – Synology long sleeve sweaters

    – Synology key holders

    – Synology caps

     

    Prize winners will be announced on April 21st, and also contacted through DM or email by Synology.

    For more details, please refer to the terms and conditions in the survey link.

    @steveonofaro

  • Synology Active Backup for Business

    In this post I wanted to run through configuring and using Active Backup for Business on your Synology NAS. The software is free to use and supports Windows Physical Server, Windows PC, vSphere, Hyper-V and File Server Backups. You also have the option to install Active Backup for G Suite and Active Backup for Microsoft 365. For this example, I will be running through a backup and restore of a VM from my VMware vSphere environment.

    At the time of writing it supports vSphere versions 5.0 through 6.7. You will also need to ensure that TCP ports 443 and 902 are open between the Synology NAS and the vSphere environment.

    Before you get started make sure that your DiskStation Manager (DSM) is at the latest version and the user you have logged in with belongs to the administrators group.

    Launch Package Center which is located on the main desktop

    Then locate Active Backup for Business and click Install

    The software will then be downloaded and installed. It will then be available from the main menu.

    Launch Active Backup for Business

    Select Activate to start the activation process. You will require your Synology account for this process.

    Review and accept the privacy statement. Then click Next

    Enter your Synology account details and click Activate

    Once complete click Ok

    You will then be ready to start configuring Active Backup for Business.

    For this example I will be connecting to my vSphere environment. So click on Manage Hypervisor under the VMware vSphere tab.

    Then click on Add

    Enter your vCenter server Name or IP Address, then enter a username and password to connect.

    Once complete click Apply

    Accept the certificate request that appears and once connected close the Manage Hypervisor window.

    The Active Backup for Business window will now show the ESXi hosts and VMs which are currently connected to the vCenter Server.

    Next up it's time to create a backup task. Click on the Task List option.

    Click on the Create drop-down menu and select vSphere task

    With the backup destination highlighted click Next

    Note – The file system needs to be Btrfs

    For the Task Name enter a name for the backup job. Then expand your cluster and select which VMs you would like to backup.

    Then click Next.

    Once you are happy with the options you have selected click Next

    The next screen allows you to either perform a manual backup or configure your own backup schedule. For this example I am just going to create a Manual Backup.

    Once complete click Next.

    Next up are the retention policy settings. You can select to keep all versions of your backups, or create a policy that will, for example, keep all versions for the next 14 days.

    I am just going to select Keep all versions for this backup. Once you have made your selection click Next.

    Then select which users you would like to grant restore privileges to. I only have my base admin user created at this stage, but it would be a good idea to create a specific restore user or group and then select it here.

    Once you have made your selection click Next.

    Review your settings and when you are ready click Apply.

    As I selected a Manual Backup I was then prompted to run the backup now. For this example I will select Yes.

    The backup has now started and can be monitored from the Task List.

    The backup has now completed successfully.

    If you click on Details you will see more information about the job itself.

    Now let’s run through a restore scenario. I could just restore the VM back to its original location in vCenter which would work perfectly fine, but I was keen to test a restore to Virtual Machine Manager on the Synology itself.

    Before we start you need to have Virtual Machine Manager installed. To do this open Package Center and then scroll down until you locate Virtual Machine Manager. Then click Install and follow the prompts.

    Once the installation is complete open Virtual Machine Manager and follow the wizard to complete the configuration. Also add in any networks that you may require for your newly restored VM from the Network tab in Virtual Machine Manager.

    Now switch back over to Active Backup for Business, select Virtual Machine from the left hand menu and then select Task List to see the last backup job.

    Highlight the Backup Task and click Restore

    Next you need to select where you would like to restore the VM. Normally this would just be back to VMware vSphere but for this example I wanted to restore the VM to Virtual Machine Manager on the Synology itself.

    Select Instant Restore to Synology Virtual Machine Manager, then click Next.

    Select the restore point you would like to restore from and click Next.

    Select the storage where the Virtual Machine will be stored and click Next.

    Check the hardware configuration and make any necessary changes and click Next.

    On the Virtual Disk configuration page click Next.

    On the network configuration page click Next.

    You will then be prompted to download the Synology Guest Tool. This is the VMware Tools equivalent and will be mounted to the VM ready to install. Click on Download to continue.

    Then click Next.

    If you have a specific account that you would like to assign permission to interact with this VM, select it here and then click Next.

    Once complete click Apply.

    The Server will then be imported from the backup job and will display in Virtual Machine Manager.

    From the Action drop-down menu you can make any changes you require to the VM prior to powering it on. For this example I have not made any changes to the VM.

    With the virtual machine highlighted select Power On to start the VM.

    Once the Status of the VM shows Running then the Connect button will no longer be greyed out.

    Click on the Connect button to interact with the VM. The Virtual Machine will now power up ready for use in a new window.

    Using the arrow tab on the left of the screen you can send a Ctrl + Alt + Del to the VM so that you will be able to log in.

    Once logged in you will see that the Synology Guest Tool ISO is mounted.

    Launch the installer and click Next.

    Accept the license agreement and click Next.

    On the destination folder screen click Next.

    Then click Install.

    Once complete click Finish.

    Then select Yes to restart the VM.

    Once the VM has restarted log back in and you will be ready to use your newly restored Virtual Machine which is now running on your Synology.

    That’s it from me, I hope this helps you get started with Active Backup for Business!

     

    @steveonofaro

  • Synology DS1621xs+ Getting Started Guide

    In this post I wanted to run through setting up a new Synology DS1621xs+. Deploying a Synology at home has been on my to-do list for some time now. With multiple PCs and a lab running at home I have accumulated a fair amount of data, and as always it's stored in all different places. Time to start consolidating!

    The 6 bay unit has an Intel Xeon D-1527 4-core 2.2 GHz CPU and comes with 8 GB of DDR4 ECC memory, which you can expand to 32 GB if required. It also has two M.2 2280 NVMe SSD slots which I will be using to set up an SSD cache. There is also a front mounted USB 3.0 port in the lower right hand corner which will come in handy when copying data onto the NAS. For now, I have 6 x 2 TB drives which I will be using during this setup example.

    On the back of the unit you can see the 2 x 1GbE ports and the 1 x 10GbE port. Unfortunately I don't have 10GbE at home, so I will just be connecting both of the 1GbE ports for now, but I do like having the option. It also has 2 more USB 3.0 ports and 2 x eSATA ports.

    I will then be installing 2 x M.2 2280 NVMe SNV3400-400G Solid State Drives. These will be used later to configure my SSD Cache.

    I would recommend removing a few of the empty drive slots to give yourself enough room to install the M.2 drives. Then the M.2 drives just click into place in the mounts provided on the inner left hand wall.

    Next up you will need to install your hard drives. Push on the base of the drive slot at the front to unclick it from the main unit. Then slide the empty drive caddy out. The outer side rails just clip off allowing you to then install your drive into the caddy. Clip the side rails back into place and slide the drive caddy back into the Synology. Then push the base of the caddy down and it will click into place. You can also lock the drives in place with the key provided.

    Now when ready you can connect the two 1Gb Ethernet ports to your switch and then connect the power cable and power on the Synology unit.

    Before we get started make sure the pc that you are going to use to configure your new NAS is on the same network as your Synology and you are able to access the internet.

    Open your browser and enter – http://find.synology.com/

    Once your Synology is located on your network click Continue.

    Click Next on the End User License Agreement.

    Click Continue on the Synology Privacy Statement.

    Then on the Web Assistant page click Set Up.

    Download the latest copy of DSM from the Synology download center.

    Select the copy of DSM that you just downloaded using the Browse button.

    Once started you will be prompted that all of the data on the disks will be removed. Tick the box and click Ok.

    The process will now commence.

    This takes around 10 minutes to complete and the NAS will be restarted during this process.

    Then enter a Server Name for your Synology and a Username and Password.

    Once complete click Next.

    Setting up QuickConnect allows you to access your Synology from outside of your home network without having to configure any port forwarding on your home router or firewall. To do this you will need to set up a Synology Account.

    For this example I will set this up after the initial setup of the unit so I have selected Skip this step.

    With the initial setup complete click on Go.

    When the Smart Update screen appears click Got It.

    You should now see your DiskStation Manager Desktop.

    Select the menu from the top left, then select Storage Manager.

    You will then see an overview of Storage Manager. You should be able to see your disks but no volumes or storage pools just yet.

    Select Storage Pool from the left-hand menu and then click Create.

    You have 2 options here, Better Performance or Higher Flexibility. Better Performance supports only 1 volume while Higher Flexibility supports multiple volumes. For this example I have selected Higher Flexibility. Then click Next.

    Select your RAID type based on the number of disks you have installed. For this example I am going with RAID 5.
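    If you are weighing up the RAID options, a quick way to compare them is to work out the usable capacity each one leaves you. Below is a minimal sketch of that arithmetic, using the 6 x 2TB drives from this example. The figures are raw approximations only and ignore filesystem overhead, so treat the helper function as illustrative rather than exact.

    ```python
    # Hypothetical helper comparing approximate usable capacity per RAID type.
    # Disk count and size match the 6 x 2TB drives used in this example.

    def usable_tb(raid_type: str, disks: int, size_tb: float) -> float:
        """Approximate usable capacity in TB for common Synology RAID types."""
        if raid_type == "RAID 5":   # one disk's worth of parity
            return (disks - 1) * size_tb
        if raid_type == "RAID 6":   # two disks' worth of parity
            return (disks - 2) * size_tb
        if raid_type == "RAID 10":  # mirrored pairs, then striped
            return disks / 2 * size_tb
        if raid_type == "SHR":      # behaves like RAID 5 when all disks are equal
            return (disks - 1) * size_tb
        raise ValueError(f"unknown RAID type: {raid_type}")

    for rt in ("RAID 5", "RAID 6", "RAID 10", "SHR"):
        print(f"{rt}: ~{usable_tb(rt, 6, 2.0):.0f} TB usable")
    ```

    With equal-size disks, RAID 5 gives the best balance of capacity and redundancy here, which is why I went with it.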

    Once complete click Next.

    Drag each of your available hard drives over to the right.

    Then click Next.

    A warning will appear informing you that all of the data on the disks will be erased. Click Ok to continue.

    Then click Apply.

    When prompted to create a volume click Ok.

    The Storage pool should display a status of Healthy once complete.

    Once the storage pool has been created click on Volume from the left-hand menu. Then click Create.

    Select the Storage Pool and click Next.

    Below is a table explaining the features of each file system. Note if you are looking to use applications like Active Backup for Business or Virtual Machine Manager then you will need to select Btrfs.

    Make your selection based on your use case and click Next.

    Set a description and the allocated size and click Next.

    Then click Apply.

    The volume will then be created and the status should appear as Healthy.

    Then click on HDD/SSD from the left-hand menu. You will see all of the disks installed in the system, including the NVMe cache drives I installed earlier.

    It is now time to set up the SSD cache. Click on SSD Cache from the left-hand menu and click Create.

    Choose between Read-write cache and Read-only cache. For this example I am going with Read-write cache.

    Select both of the NVMe drives and click Next.

    RAID 1 is our only real option here, so click Next.

    Then click Apply.

    Tick the checkbox confirming that all data on the SSDs will be removed and click Ok.

    Once complete the SSD Cache should show as Healthy.

    That’s it, your Synology is now configured and ready to use. Close the Storage Manager window.

    Launch Package Center which is located on the desktop.

    Then browse through the packages available to install on your new Synology. A great place to start is Virtual Machine Manager, Cloud Sync and Active Backup for Business!

    As you can see, the Synology DS1621xs+ is very straightforward to set up and I have been extremely impressed with how it has been performing so far. With the Xeon 4-core processor and the dual M.2 NVMe cache drives you really can’t go wrong! If you are looking at upgrade options at the moment, I would highly recommend you check out this NAS.

    That’s it from me, I hope this has helped you get started with your new Synology DS1621xs+.

    @steveonofaro

  • Google Cloud Object Storage option now available in Veeam v11

    One of the new options available in Veeam Backup & Replication v11 is the ability to add Google Cloud Storage in as a Capacity Tier option for your Scale-Out backup repository. With each new release we are seeing the object storage options available to users grow. Whether you want to use your own on-premise solution or leverage one of the public cloud providers Veeam have you covered.

    With that said let’s first run through creating your new Google Cloud Storage Bucket.

    When you first sign up to GCP you can leverage the Free Tier program. Be sure to check the Free Tier usage limits as some of the options are only available from certain regions.

    https://cloud.google.com/free/docs/gcp-free-tier

    Your free trial also includes $300 in credit to spend over the next 90 days, which is great if you are just getting to know the platform.

    I will be leveraging my Cloud Storage from the australia-southeast1 location, which is based in Sydney. This unfortunately doesn’t fall under the free tier bracket. Luckily Google provides a monthly estimate calculator which you can use when you first create your bucket.
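    If you just want a ballpark figure before opening the calculator, the at-rest storage cost is simply gigabytes stored times the per-GB monthly rate for your storage class. The rates below are illustrative assumptions only (pricing varies by region and changes over time), so always confirm against Google's pricing page or the calculator shown during bucket creation.

    ```python
    # Rough monthly at-rest storage cost estimate for Google Cloud Storage.
    # Per-GB prices are assumed example rates in USD, not authoritative figures.

    PRICE_PER_GB_MONTH = {
        "Standard": 0.023,  # assumed example rate
        "Nearline": 0.016,  # assumed example rate
    }

    def monthly_storage_cost(gb_stored: float, storage_class: str) -> float:
        """Estimate the monthly at-rest storage cost in USD."""
        return gb_stored * PRICE_PER_GB_MONTH[storage_class]

    # e.g. 500 GB of backups kept in Nearline:
    print(f"${monthly_storage_cost(500, 'Nearline'):.2f} per month")
    ```

    Note this only covers storage at rest; operations and egress charges are billed separately.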

    After logging in, select the drop-down menu from the top left and then click on Storage. As I am not using Cloud Storage from one of the US regions (which falls under the free tier) I need to click on Enable Billing before I can create a bucket.

    Next up select your billing account from the drop-down menu and click Set Account.

    Now the Create Bucket option is no longer greyed out. Click Create Bucket.

    Enter a bucket name and then click Continue.

    Next up select where you would like to store your data. I just need a single region for this example, so I have selected the first option Region. Then select your location from the drop-down menu and click Continue.

    Currently only Standard and Nearline are supported by Veeam v11 so select one of these options. In this example I have selected Nearline. Remember which option you select here as you will need it later during the configuration in VBR.

    Then click Continue.

    Select your preferred Access Control method and click Continue. I am just keeping things simple here and have gone with Uniform.

    For the encryption settings I have just selected Google-managed key. Then click Create.

    Once complete your bucket will be created.

    Now we need to configure the Access Keys so that you can connect to the bucket from VBR. Select Settings from the left-hand menu. Again, this is my lab so I am keeping things simple here.

    Then select Interoperability.

    Scroll down and Set your default project. I have left mine as the default name.

    Then click on Create A Key.

    Take a copy of your Access key and Secret as we will need them when creating the connection from the VBR console. The access key and secret you see below have been modified.
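    These interoperability (HMAC) keys are what let S3-style clients like VBR talk to the Cloud Storage XML API at storage.googleapis.com. As a rough sketch of what happens under the hood, here is how an AWS-style V2 signature is built from the key pair for a bucket-listing request. The key values and bucket name below are placeholders, and this is an illustration of the signing scheme rather than anything VBR exposes:

    ```python
    import base64
    import hashlib
    import hmac
    from email.utils import formatdate

    # Placeholder credentials - substitute the Access key and Secret taken
    # from the Interoperability page. The Cloud Storage XML API accepts
    # AWS-style V2 signatures computed with these HMAC keys.
    ACCESS_KEY = "GOOGEXAMPLEACCESSKEY"
    SECRET_KEY = "exampleSecretKeyExampleSecretKeyExample0"
    BUCKET = "example-veeam-bucket"

    def sign_request(verb: str, resource: str, date_header: str) -> str:
        """Build the Authorization header for a Cloud Storage XML API request."""
        # V2 string-to-sign: verb, content-md5, content-type, date, resource
        string_to_sign = f"{verb}\n\n\n{date_header}\n{resource}"
        digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(),
                          hashlib.sha1).digest()
        return f"AWS {ACCESS_KEY}:{base64.b64encode(digest).decode()}"

    date_header = formatdate(usegmt=True)  # e.g. "Mon, 01 Mar 2021 00:00:00 GMT"
    auth = sign_request("GET", f"/{BUCKET}/", date_header)
    print(auth)  # header sent with GET https://storage.googleapis.com/<bucket>/
    ```

    In practice VBR handles all of this for you; the only thing you need to supply is the key pair.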

    Next up open your Veeam Backup & Replication 11 console and click on Add Backup Repository. Then select Object Storage.

    Then click on Google Cloud Storage.

    Add in a Name and Description for this new Object Storage Repository. Then click Next.

    Click on Add and enter in your Access key and Secret Key, then click Ok.

    Once your access key has been added you can now select a gateway server to proxy access to Google Cloud Storage. 

    Then click Next.

    First up you will see your Data center region. Then select your bucket name from the drop-down menu and set the folder you would like to use.

    Select the checkbox next to Limit object storage consumption to and set your soft limit if required.

    You will also notice there is no option to make the backups immutable just yet. I am sure this is something that will come in due course, possibly by utilizing Google’s retention policy settings. This is just me taking a guess, but we will see what happens.

    Then you have the option to select the nearline storage class for a lower price per GB. I created my bucket using nearline storage so I have enabled this.

    Once complete click Apply.

    Then click Finish.

    That’s it! Your new Google Cloud backup repository is now connected as a repository in VBR.

    Now let’s run through adding it to my existing Scale Out Backup Repository. Right click the SOBR and select Properties.

    Select Next.

    Note – You can skip ahead and just click on Capacity Tier but I thought I would show each section.

    Then click Next.

    Click Next.

    Click on the check box next to Extend scale-out backup repository capacity with object storage. Then select the Google Cloud Object Storage repository from the drop-down menu.

    You then have the option to copy backups to object storage after they are created or to Move backups to object storage as they age out of the operational restore window.

    In this example I have selected Copy backups to object storage as soon as they are created. Once the normal backup job is complete the offload job to copy the data to the capacity tier will start.

    Once complete click Apply.

    Click Finish.

    For this example I have just created a new backup job using the newly edited Scale-Out Backup Repository.

    The backup itself will run and then once complete the Offload job will start.

    As the data is copied over the directory structure on the Google Cloud capacity extent is laid out in the following way by the VBR server,

    When the job has completed you will notice the Object Storage option in the left-hand menu. Once selected you will see the job that was recently run. You will also see how many restore points are available to restore from and the various restore options available.


    That’s it from me, stay tuned for more updates and features coming out with Veeam Backup & Replication v11!


  • Getting started with the Veeam v11 Native Plug-in for vCloud Director

    With the upcoming release of Veeam Backup & Replication v11 comes the new version of the Veeam Native vCloud Director Plugin. This addition has been a big hit with Service Providers as it allows them to integrate the Veeam Self-Service Portal directly into the vCloud Director interface. One new addition to the plugin that everyone will be happy to hear about is single sign-on using the tenant’s vCloud Director credentials.

    Let’s go over a few things that you need to have in place prior to installing the new plugin.

    The first step is to have Veeam Backup & Replication v11 and Veeam Enterprise Manager v11 installed. Then add your vCloud Director instance to Veeam Backup & Replication. You can do this by opening the VBR console and selecting Inventory, Add Server, VMware vSphere and then selecting vCloud Director. Then follow the prompts to add your vCD instance.

    Note – I am currently running Cloud Director 10.2 but I have also tested the plugin on 10.0.0, 10.1.0, 10.1.1 and 10.1.2. Now technically I should be referring to it as Cloud Director throughout this post but I just can’t bring myself to drop the “v” yet, it’s too soon.

    The next thing you will need is a Tenant Organization set up in your vCloud Director instance with a couple of VMs running. For this example we will use TENANT01, which I have set up below.

    Now let’s go through the process to configure the Enterprise Manager Self-Service portal for use with vCloud Director, and then we will set up the new Veeam v11 Plugin.

    Double click on the Veeam Backup Enterprise Manager icon on your desktop and login.

    While we are here you will also notice the new drop-down option to change the default language.

    I thought I may as well test out my German,

    As you can see once you have logged in the interface is also updated.

    Ok back to the Enterprise Manager configuration. Once logged in click on Configuration.

    From the Backup Server menu select Add to add in your Veeam Backup & Replication Server.

    Enter your Veeam Backup & Replication server details and click Ok.

    Once the collection has completed the Self-Service option will display on the left-hand menu.

    Click on Self-Service, then select vCloud and click Add.

    Note – If you do not have the Self-Service option available then check that you have your Service Provider Enterprise Plus license installed and try again.

    Then complete the following fields,

    Organization – Select the Cloud Director Tenant.

    Repository – Select the Backup Repository you would like to use for this tenant’s backups.

    Friendly Name – Set a friendly name for this customer. This friendly name will also be used as the repository name the tenant can see when creating a new job.

    Quota – Set a backup storage quota limit for this tenant.

    Job Scheduling – Select the permissions for job scheduling from the drop-down menu.

    Job Priority – Select between Normal and High. Default option is set to Normal.

    The newly created Tenant’s account will now be displayed.

    The self-service backup portal is now accessible to the customer using a dedicated URL and their existing vCloud Director login details. Here is an example of the self-service backup portal for TENANT01 – https://backup.stevenonofaro.net:9443/vCloud/TENANT01
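    The dedicated URL pattern above is just the Enterprise Manager host and port followed by /vCloud/ and the organization name. A small hypothetical helper makes the pattern explicit (the host and org names are the examples from this post; the function itself is mine, not part of Veeam):

    ```python
    from urllib.parse import quote

    # Hypothetical helper that builds the per-tenant self-service portal URL.
    # The org name is URL-encoded in case it contains special characters.

    def tenant_portal_url(em_host: str, org_name: str, port: int = 9443) -> str:
        """Return the Enterprise Manager self-service URL for a vCD org."""
        return f"https://{em_host}:{port}/vCloud/{quote(org_name)}"

    print(tenant_portal_url("backup.stevenonofaro.net", "TENANT01"))
    # -> https://backup.stevenonofaro.net:9443/vCloud/TENANT01
    ```

    This can be handy if you are generating welcome emails or documentation for each tenant.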

    Having a separate portal to log into for the customers backups can be helpful but why not integrate it into the main vCloud Director interface?

    With that said let’s look at getting the new Veeam v11 Plugin for vCloud Director up and running. To get started mount the Veeam Backup & Replication v11 ISO to your management server, the plugin is located in the following directory – “Plugins\vCloud Director\plugin.zip”

    Then login to the vCloud Director provider portal, select the More drop-down menu and click on Customize Portal.

    Select the Upload button.

    Then click on Select Plugin File.

    Browse to the plugin location on the Veeam Backup & Replication ISO – “Plugins\vCloud Director\plugin.zip”

    Then click Open.

    Once the plugin file has been selected click Next.

    Set the Scope, select which Tenants you would like to publish the plugin to and click on Next.

    Then select Finish

    The plugin will now be displayed below,

    Then you can either refresh the browser or log out of vCloud Director and then log back in. Next select the More drop-down menu and select Veeam Data Protection.

    Enter the URL for your Enterprise Manager Server and click Save.

    Note – For this to work correctly you will need to have an SSL certificate installed on your Enterprise Manager server. If you haven’t already done so, install the certificate now. Then open IIS, expand out your sites and select VeeamBackup. Select Bindings from the right-hand menu, click on https and select Edit. On the SSL Certificate drop-down menu select your SSL cert and click Ok. Under Manage Website on the right-hand menu click on Restart.

    Next log out of the vCloud Director Provider Portal and log in as the Tenant. Below is an example of the Tenant URL – https://vcloud.stevenonofaro.net/tenant/TENANT01

    Select the More drop-down menu and click on Veeam Data Protection

    You will now be logged directly into the Self-Service Backup Portal from within the vCloud Director interface. You will also notice that the top section of the Self-Service Portal has been removed and the colour theme has been updated to match the default vCloud Director theme. The interface blends perfectly now and looks like a natural extension of vCloud Director.

    Once you have created a backup job for your tenant there is also one other bonus feature Veeam have added in with v11. If you select the Actions button next to one of your VMs you will also notice Add to Backup has been inserted into the vCloud Director VM menu system.

    Note – For Cloud Director 10.0.0, 10.1.0, 10.1.1 and 10.1.2 you will first see Veeam Data Protection in the menu, then Add to Backup

    After selecting Add to Backup you will then be prompted to select the backup job to add the VM to. Click Add to Job. The VM has now been added to the backup job without you even having to switch over to the Self-Service Portal.

    That’s it from me, stay tuned for more updates and features coming out with Veeam Backup & Replication v11!

    @steveonofaro