In the following article, we will go through the creation of a simple Linux cluster (2 compute nodes with 44 cores each) hosted in Microsoft Azure. To complete all the steps provided here, you will need a Microsoft Azure account with enough credits and a valid Power-on-Demand (PoD) license key. Also, please note that the submission script and .sim file are attached to this article.
From a high-level perspective, there are two ways of setting up the cluster: one is through the CLI (Command Line Interface) and the other is through the Azure CycleCloud utility. We will cover the first one here.
This article is meant to complement two guides already provided by Microsoft:
Get the azure-hpc package through the CLI and source it
To get started, we should get the set of scripts developed by Microsoft aimed at simplifying the setup of an HPC environment in the cloud. Let’s log into the Azure portal (https://portal.azure.com/) and open a Cloud Shell window:
In the home directory, execute the following commands:
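Judging from the note below, these commands amount to cloning Microsoft's azurehpc repository from GitHub and sourcing its install script:

$ git clone https://github.com/Azure/azurehpc.git
$ source azurehpc/install.sh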
Note: if you exit the CLI and log in again, remember to reload the azure-hpc utilities by executing source azurehpc/install.sh again
Deploying the cluster
Before executing the first commands, we should make sure that our Azure subscription has a minimum set of services registered; otherwise, errors can come up during the deployment of the cluster. This is especially important for recently created subscriptions. Please verify that at least the following services are registered:
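As an illustration, a service's registration state can be checked, and the service registered if needed, with the Azure CLI from the Cloud Shell (Microsoft.Compute is only an example namespace; repeat for each service your subscription requires):

# example namespace - substitute each provider you need to verify
$ az provider show --namespace Microsoft.Compute --query registrationState -o tsv
$ az provider register --namespace Microsoft.Compute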
The first step is getting a copy of the simple_hpc_pbs directory (taken from the azurehpc folder created at the beginning) and updating the config.json file it contains. We can do this by using the azhpc-init command:
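A sketch of that command, with illustrative values (the region, VM type, and resource-group name are assumptions; substitute your own, and verify the exact variable names against your config.json):

# illustrative values only
$ azhpc-init -c azurehpc/examples/simple_hpc_pbs/config.json -d simple_hpc_pbs -v location=westeurope,vm_type=Standard_HC44rs,resource_group=starccm-cluster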
By executing the lines above, we copy the desired folder and update the location, VM type, and resource_group variables through the -v option.
At this point, we may have to modify the config.json file to ensure that the cluster will be deployed successfully. Please remove the following lines from the config.json file located in the simple_hpc_pbs folder:
Finally, create the cluster by navigating to the simple_hpc_pbs folder and executing the azhpc-build command:
$ cd simple_hpc_pbs
$ azhpc-build
Once the build finishes, make sure that you can log into the cluster using $ azhpc-connect -u hpcuser headnode
Note: To log into the cluster from the Cloud Shell, make sure to execute $ azhpc-connect -u hpcuser headnode from the simple_hpc_pbs folder.
Get the Simcenter STAR-CCM+ installer on your cluster
Now that the cluster is up and running, we need to transfer a Simcenter STAR-CCM+ installer file to it. The cluster deployed by the tutorial does not have public SSH access enabled by default. A couple of workarounds can be used for this situation:
Use another virtual machine with public SSH access enabled and transfer the desired Simcenter STAR-CCM+ installer to it from your local machine. Then, copy the installer into the cluster.
Start a remote desktop session to a virtual machine and access Support Center from there. Download the Simcenter STAR-CCM+ installer and then transfer it to the cluster.
We will quickly illustrate the first approach.
If you don’t have a machine with public SSH access yet, create one: log into the Azure portal, go to Virtual Machines, and then click Add > Virtual Machine.
Configure and create the Virtual Machine. Make sure that some type of public access is selected; in this case, we are configuring access via username and password through the SSH port (22). Finally, click Review + Create.
Wait until the Virtual Machine deploys completely and go to its information page. From here, extract the value of the Public IP address.
With a tool like WinSCP, access the VM from your local machine (where the Simcenter STAR-CCM+ installer resides). Populate the Host name field with the Public IP address value, and enter the User name and Password values set when creating the VM.
Drag and drop the installer file into the Explorer window of your VM (visible thanks to WinSCP) and wait until the transfer finishes.
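Alternatively, if an OpenSSH client is available on your local machine, the same transfer can be done from a terminal (the username, IP address, and filename below are placeholders):

# placeholders: adjust the installer filename, VM user, and public IP
$ scp STAR-CCM+15.04.008_01_linux-x86_64.tar.gz azureuser@<VM-public-IP>:~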
Transfer the file to the cluster by following these steps:
• Go to the Azure portal and open a Cloud Shell. Don’t forget to execute source azurehpc/install.sh if this is a new Cloud Shell session.
• Navigate to the simple_hpc_pbs folder and log into the cluster with $ azhpc-connect -u hpcuser headnode
• Execute scp at sudo level as shown in the image below.
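As a sketch of that last step (the VM username, public IP, and paths are placeholders), run from the head node:

# pull the installer from the jump VM into /mnt/resource on the head node
$ sudo scp azureuser@<VM-public-IP>:~/STAR-CCM+15.04.008_01_linux-x86_64.tar.gz /mnt/resource/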
Now, let’s log into the head node to start the installation process. Without switching directories, run the following command again to get into the head node:
$ azhpc-connect -u hpcuser headnode
Once logged in, navigate to the directory containing your installer. If you followed the steps above, it was placed under /mnt/resource.
From here, we can follow the standard Simcenter STAR-CCM+ installation procedure (Simcenter STAR-CCM+ Installation Guide > Installing Simcenter STAR-CCM+ > Installing Simcenter STAR-CCM+ on Linux):
• Extract the installer package by executing $ tar -zxvf STAR-CCM+15.04.008_01_linux-x86_64.tar.gz
• Navigate to the folder containing the extracted package: cd starccm+_15.04.008/
• Using sudo, launch the Simcenter STAR-CCM+ installer: sudo ./STAR-CCM+15.04.008_01_linux-x86_64-2.12_gnu7.1.sh -i console -DINSTALLDIR=/apps/starccm. Here, the -DINSTALLDIR property specifies the installation folder.
At this stage, you can optionally remove the starccm+_15.04.008 folder and the STAR-CCM+15.04.008_01_linux-x86_64.tar.gz file from /mnt/resource. At this point, you should have Simcenter STAR-CCM+ installed on your cluster. You can quickly check that this is the case by launching a server and observing its output:
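For example, assuming the launcher ends up under the installation folder chosen above (the exact sub-path depends on your version and layout), a server that starts and prints its host:port line indicates a working installation:

# illustrative path - adjust to your actual layout under /apps/starccm
$ /apps/starccm/15.04.008/STAR-CCM+15.04.008/star/bin/starccm+ -server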
Submitting a simulation to the cluster
We are now ready to run a job on the cluster. This is done through a submission script; an example of one is attached to this article. Note that the script has some lines that should be edited to match your configuration. For this article, we have set the parameters as follows:
# parameters that can be overridden
APP_INSTALL_DIR=${APP_INSTALL_DIR:-/apps}
DATA_DIR=${DATA_DIR:-/data/starccm}
CASE=${CASE:-OneraM6}
OMPI=${OMPI:-openmpi4}
STARCCM_VERSION=${STARCCM_VERSION:-15.04.008}
In other words, we will run a case named OneraM6.sim located in /data/starccm using version 15.04.008 installed under /apps. Let’s now execute the script to submit the job with the following command:
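A hedged example of that submission, where run_starccm.pbs stands in for the attached script and the PBS resource selection matches the cluster described below:

# run_starccm.pbs is a placeholder name for the attached submission script
$ qsub -l select=2:ncpus=44:mpiprocs=44 run_starccm.pbs

Parameters can also be overridden at submission time (for example, qsub -v CASE=AnotherCase ...) rather than by editing the script.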
The cluster deployed in this article has two compute nodes with 44 cores each, hence the arguments select=2 and ncpus=44.
As soon as we hit Enter, a folder called x.headnode (where x is the job ID) gets created. If we navigate to it, we can see a .txt file that corresponds to the output log of the simulation:
At this point, you can also check the cluster’s status. You should see that both nodes are busy:
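Assuming the standard PBS client tools are available on the head node, one way to check is:

$ qstat -a
$ pbsnodes -aSj

Here, qstat -a lists jobs and their states, while pbsnodes -aSj prints a per-node summary including the jobs running on each node.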
When the simulation finishes, the solved .sim file will be available in the same folder as the original .sim file. So if we navigate to /data/starccm we will see two files: the original one (OneraM6.sim) and the solved one (OneraM6@00297.sim):
Please note that the submission script can always be modified to fit your organization’s needs. Hopefully the one attached can work as a starting point for further customization. If you wish to know more about integrating cloud services and Simcenter STAR-CCM+, please contact your Dedicated Support Engineer and/or your cloud services provider.
User Guide Sections:
Simcenter STAR-CCM+ Installation Guide > Installing Simcenter STAR-CCM+ > Installing Simcenter STAR-CCM+ on Linux
Getting Started > Simcenter STAR-CCM+ Licensing > Power-on-Demand Licensing