Nano
v0.4.1 - NFS Storage Pool
2018-10-02

Hello everyone

Sorry for the delayed delivery; work has been busy recently. After more than a month, this major update has finally arrived.

Version 0.4.1 implements NFS back-end storage access, a major update that separates compute from storage. A corrupted Cell node no longer affects instance data, which significantly improves availability and enables advanced functions such as failover and migration.

The introduction of the storage resource pool establishes the basic storage model and usage. NFS was chosen as the first backend because it is simple and reliable; VSAN/Ceph extensions will be implemented in the future.
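To make the idea concrete, here is a minimal Go sketch of how a Cell node could mount the shared NFS export and keep instance disks on it. This is only an illustration, not Nano's actual code; the export address and local paths are made up.

// nfs_mount_sketch.go - illustrative only; export address and paths are assumptions.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	const (
		nfsExport  = "192.168.1.10:/export/nano"    // hypothetical NFS server export
		mountPoint = "/var/lib/nano/shared_storage" // hypothetical local mount point
	)

	// Ensure the local mount point exists before mounting.
	if err := os.MkdirAll(mountPoint, 0755); err != nil {
		log.Fatalf("create mount point: %v", err)
	}

	// Mount the export; on a real Cell this requires nfs-utils and root privileges.
	cmd := exec.Command("mount", "-t", "nfs", nfsExport, mountPoint)
	if output, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("mount failed: %v (%s)", err, output)
	}

	// With the pool mounted, instance disks and snapshots can live on shared
	// storage instead of the Cell's local disk, so losing the Cell does not
	// lose the data.
	diskPath := filepath.Join(mountPoint, "instances", "demo-instance", "disk-0.qcow2")
	fmt.Println("instance disk would be created at:", diskPath)
}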

Below is a demo of using the storage feature. You are welcome to download it and send feedback.

[demo: NFS storage pool]

Change list:

  • Added storage pool management with support for an NFS storage backend.
  • Allows adjusting the storage location when creating or modifying a compute pool. When a storage pool is used, all instance disk files and snapshots are stored in the specified backend.
  • When a Cell node joins the pool, it automatically mounts the backend storage, and the mount status is accessible via the web portal (see the sketch after this change list).
  • Bundled a standalone Google icon package so the portal works in intranet deployments.
  • When a Cell is stopped or abnormal, the instances on it are marked as lost and shown in the dashboard chart.
  • The installer now includes the semanage/nfs-utils RPMs.
  • When an RPM fails to install, the installer asks whether to continue, so the user can repair the fault manually later.
  • Fixed: Session timeout caused the Core proxy to panic.
  • Fixed: Snapshot files not deleted correctly when deleting an instance
  • Fixed: Incorrect instance count after an instance is deleted.
  • Fixed: No response from the insert-media or boot-from-media buttons when no ISO images have been uploaded.
  • Fixed: The Cell/Instance list page became very slow after a long-running automatic refresh.
  • Fixed: The prompt message is not displayed correctly when the Cell list page has no results.
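For the mount status mentioned above, a Cell can confirm that the backend is actually mounted by scanning /proc/mounts. The following Go sketch is an assumption for illustration only; the mount point is hypothetical and Nano's real implementation may differ.

// mount_status_sketch.go - illustrative only; the mount point is an assumption.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// isMounted reports whether the given path appears as an NFS mount point in /proc/mounts.
func isMounted(mountPoint string) (bool, error) {
	file, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		// Each line looks like: "device mountpoint fstype options dump pass"
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 3 && fields[1] == mountPoint && strings.HasPrefix(fields[2], "nfs") {
			return true, nil
		}
	}
	return false, scanner.Err()
}

func main() {
	// Hypothetical mount point used by a storage-pool-backed compute pool.
	const mountPoint = "/var/lib/nano/shared_storage"
	mounted, err := isMounted(mountPoint)
	if err != nil {
		log.Fatalf("check mounts: %v", err)
	}
	fmt.Printf("NFS backend mounted at %s: %v\n", mountPoint, mounted)
}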