Kepler Computing Cluster

Welcome!

The Kepler cluster is a high performance computing facility that caters to the diverse computational requirements of researchers within the Department of Physical Sciences, and it serves as the cornerstone of the Department's computational work. At present, the facility comprises ten CPU-only compute nodes, one GPU-enabled compute node, and a master node, which functions as the hub for connecting to the compute nodes and for overseeing job management. All nodes run the Red Hat-based AlmaLinux distribution as the operating system (OS). Load balancing and job management for the cluster are handled by the Portable Batch System (PBS).
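
Since jobs are scheduled through PBS, the usual workflow is to write a short submission script and submit it with qsub. The sketch below is illustrative only; the queue name (workq) and the resource limits are assumptions and may differ on Kepler, so please check the available queues with qstat -Q before submitting.

$ cat myjob.sh
#!/bin/bash
#PBS -N test_job            # job name
#PBS -q workq               # queue name (assumed; verify with qstat -Q)
#PBS -l select=1:ncpus=8    # request one chunk with 8 CPU cores
#PBS -l walltime=02:00:00   # maximum run time of 2 hours
#PBS -j oe                  # merge stdout and stderr into one file

cd $PBS_O_WORKDIR           # run from the directory where the job was submitted
./my_program > output.log   # replace with your own executable

$ qsub myjob.sh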

Request to users: We kindly request users to formally acknowledge the use of this facility in their academic papers, articles, reports, and presentations. Such acknowledgement aids in garnering continued support from our institute and other funding agencies. A sample acknowledgement is provided below:

``We acknowledge the support provided by the Kepler Computing facility, maintained by the Department of Physical Sciences, IISER Kolkata, for various computational needs.''

Moreover, we request users to notify us through the form so that we can keep track of such mentions.

How to login

To access the master node, please type the following:

$ ssh -X <username>@10.0.51.200
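
If you log in frequently, an entry in your local ~/.ssh/config file can shorten the command. The host alias kepler below is just an example name; the IP address is the master node's address given above, and ForwardX11 corresponds to the -X option.

Host kepler
    HostName 10.0.51.200
    User <username>
    ForwardX11 yes

$ ssh kepler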



How to create an account

Please send an email to the system administrator (dps.hpc@iiserkol.ac.in). Please cc your supervisor if you are an MS, RS, or PDF. Upon careful consideration, your account will be created and the necessary permissions will be granted.

Node details

Node Type No. of CPUs Total Memory
newton0 master 20 92.55 GB
newton1 compute 40 187.21 GB
newton2 compute 40 187.21 GB
newton3 compute 40 187.21 GB
newton4 compute 40 187.21 GB
newton5 compute 40 187.21 GB
newton6 compute 40 187.21 GB
newton7 compute 40 125.08 GB
newton8 compute 40 125.08 GB
newton9 compute 40 250.10 GB
newton10 compute 40 250.10 GB
newton-gpu01* compute 40 187.18 GB

* newton-gpu01 is a GPU-enabled node. It has 2 GPU devices, each with 80 multiprocessors of 64 cores.
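
To run on the GPU node, a job script typically has to request GPU resources explicitly in its PBS directives. The resource name ngpus and the queue name gpuq below are assumptions about the local PBS configuration; please confirm the actual queue and resource names with the system administrator or via qstat -Q.

#PBS -q gpuq                       # GPU queue (assumed name)
#PBS -l select=1:ncpus=8:ngpus=1   # 8 CPU cores and 1 GPU on newton-gpu01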

Connection: The internode connectivity is maintained through three 1 Gbps Ethernet switches.