Channel: Rockstor Community Forum - Latest topics
Viewing all 1937 articles

Second NIC for direct Ethernet connection


@Sky12016 wrote:

Hello to Rockstor fans!

I am on the verge of buying a couple of 10Gb NICs in order to directly connect my Rockstor server to my Windows workstation with a Cat6a cable. The goal is to achieve somewhat better transfer speeds. I know this will still be bottlenecked by the actual read/write speeds of my HDDs.

As it currently stands, my Intel (gigabit) NIC already connects my Rockstor machine to my router and consequently to the internet and the local LAN, and my shares are accessible via Samba on my home network.

Any particular setup I should know about?
I was thinking of setting up my new NIC with a different IP and subnet and leaving the gateway blank on the Rockstor machine, then copying this setup to my second network adapter in Windows.
Will this work, or do I need to create routing instructions for CentOS as well? And will my Rockstor shares be accessible from the Windows machine over the new network?
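A minimal sketch of that idea on the CentOS side using NetworkManager's nmcli; the interface name enp3s0 and the 10.10.10.0/24 subnet are assumptions:

nmcli con add type ethernet ifname enp3s0 con-name direct10g \
    ipv4.method manual ipv4.addresses 10.10.10.2/24
nmcli con up direct10g
# On Windows, give the second adapter e.g. 10.10.10.1/24, also with no gateway,
# then reach the shares as \\10.10.10.2\<sharename>.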

Regards!

Posts: 2

Participants: 2

Read full topic


Generation_errs


@Vyacheslav_Finyutin wrote:

Hello.

generation_errs are appearing on the / partition:

[root@s-nas-003 ~]# btrfs device stats /
[/dev/sda3].write_io_errs 0
[/dev/sda3].read_io_errs 0
[/dev/sda3].flush_io_errs 0
[/dev/sda3].corruption_errs 0
[/dev/sda3].generation_errs 5

in /var/log/messages:
Jul 29 22:08:20 s-nas-003 kernel: BTRFS error (device sda3): parent transid verify failed on 714555392 wanted 1796 found 24

After starting a scrub, new messages appeared in /var/log/messages:

Jul 30 00:39:19 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 714555392 on dev /dev/sda3, sector 1412000: metadata leaf (level 0) in tree 258
Jul 30 00:39:19 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 714555392 on dev /dev/sda3, sector 1412000: metadata leaf (level 0) in tree 258
Jul 30 00:39:19 s-nas-003 kernel: BTRFS error (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 0, gen 1
Jul 30 00:39:19 s-nas-003 kernel: BTRFS error (device sda3): unable to fixup (regular) error at logical 714555392 on dev /dev/sda3
Jul 30 00:39:19 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 714555392 on dev /dev/sda3, sector 3509152: metadata leaf (level 0) in tree 258
Jul 30 00:39:19 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 714555392 on dev /dev/sda3, sector 3509152: metadata leaf (level 0) in tree 258
Jul 30 00:39:19 s-nas-003 kernel: BTRFS error (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 0, gen 2
Jul 30 00:39:19 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 768704512 on dev /dev/sda3, sector 3614912: metadata leaf (level 0) in tree 2
Jul 30 00:39:19 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 768704512 on dev /dev/sda3, sector 3614912: metadata leaf (level 0) in tree 2
Jul 30 00:39:19 s-nas-003 kernel: BTRFS error (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 0, gen 3
Jul 30 00:39:19 s-nas-003 kernel: BTRFS error (device sda3): unable to fixup (regular) error at logical 714555392 on dev /dev/sda3
Jul 30 00:39:19 s-nas-003 kernel: BTRFS error (device sda3): fixed up error at logical 768704512 on dev /dev/sda3
Jul 30 00:40:01 s-nas-003 systemd: Started Session 572 of user root.
Jul 30 00:41:29 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 714555392 on dev /dev/sda3, sector 1412000: metadata leaf (level 0) in tree 258
Jul 30 00:41:29 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 714555392 on dev /dev/sda3, sector 1412000: metadata leaf (level 0) in tree 258
Jul 30 00:41:29 s-nas-003 kernel: BTRFS error (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 0, gen 4
Jul 30 00:41:29 s-nas-003 kernel: BTRFS error (device sda3): unable to fixup (regular) error at logical 714555392 on dev /dev/sda3
Jul 30 00:41:29 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 714555392 on dev /dev/sda3, sector 3509152: metadata leaf (level 0) in tree 258
Jul 30 00:41:29 s-nas-003 kernel: BTRFS warning (device sda3): checksum/header error at logical 714555392 on dev /dev/sda3, sector 3509152: metadata leaf (level 0) in tree 258
Jul 30 00:41:29 s-nas-003 kernel: BTRFS error (device sda3): bdev /dev/sda3 errs: wr 0, rd 0, flush 0, corrupt 0, gen 5
Jul 30 00:41:29 s-nas-003 kernel: BTRFS error (device sda3): unable to fixup (regular) error at logical 714555392 on dev /dev/sda3
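A few follow-up commands that may help narrow this down (a sketch; on a single-device root without DUP metadata, scrub has no second copy to repair from):

btrfs fi df /              # check whether metadata uses the DUP or single profile
btrfs device stats -z /    # after investigating, -z resets the per-device error counters
# From rescue media, with the filesystem unmounted, a read-only check makes no changes:
# btrfs check --readonly /dev/sda3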

Posts: 3

Participants: 2

Read full topic

500 Errors cleaning Snapshots


@nandor wrote:

I was trawling through logs tonight and saw a bunch of these. Any ideas?

[100] (10.13.69.27) root:/opt/rockstor/var/log
$ cat rockstor.log
[29/Jul/2019 20:59:34] ERROR [storageadmin.views.rockon_helpers:317] Waited too long (300 seconds) for postgres to initialize for owncloud. giving up.
[29/Jul/2019 20:59:34] ERROR [storageadmin.views.rockon_helpers:128] ‘NoneType’ object has no attribute ‘name’
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py”, line 125, in install
generic_install)(rockon)
File “/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py”, line 298, in owncloud_install
cmd.extend(vol_ops(c))
File “/opt/rockstor/src/rockstor/storageadmin/views/rockon_helpers.py”, line 184, in vol_ops
share_mnt = (’%s%s’ % (settings.MNT_PT, v.share.name))
AttributeError: ‘NoneType’ object has no attribute ‘name’
[29/Jul/2019 21:00:04] ERROR [scripts.scheduled_tasks.snapshot:76] Failed to delete old snapshots exceeding the maximum count(10)
[29/Jul/2019 21:00:04] ERROR [scripts.scheduled_tasks.snapshot:77] 500 Server Error: INTERNAL SERVER ERROR
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/scripts/scheduled_tasks/snapshot.py”, line 73, in delete
aw.api_call(url, data=None, calltype=‘delete’, save_error=False)
File “/opt/rockstor/src/rockstor/cli/api_wrapper.py”, line 119, in api_call
r.raise_for_status()
File “/opt/rockstor/eggs/requests-1.1.0-py2.7.egg/requests/models.py”, line 638, in raise_for_status
raise http_error
HTTPError: 500 Server Error: INTERNAL SERVER ERROR
[29/Jul/2019 21:01:05] ERROR [scripts.scheduled_tasks.snapshot:76] Failed to delete old snapshots exceeding the maximum count(10)
[29/Jul/2019 21:01:05] ERROR [scripts.scheduled_tasks.snapshot:77] 500 Server Error: INTERNAL SERVER ERROR
Traceback (most recent call last):
File “/opt/rockstor/src/rockstor/scripts/scheduled_tasks/snapshot.py”, line 73, in delete
aw.api_call(url, data=None, calltype=‘delete’, save_error=False)
File “/opt/rockstor/src/rockstor/cli/api_wrapper.py”, line 119, in api_call
r.raise_for_status()
File “/opt/rockstor/eggs/requests-1.1.0-py2.7.egg/requests/models.py”, line 638, in raise_for_status
raise http_error
HTTPError: 500 Server Error: INTERNAL SERVER ERROR

Posts: 1

Participants: 1

Read full topic

RockOn Installation Failing


@nandor wrote:

I am trying to install the OwnCloud rock-on and it keeps failing. The docker logs show the following, but I am not sure what I am doing wrong:

[124] (10.13.69.27) root:/var/lib
$ docker logs owncloud-postgres
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/fs/nfsd’: Operation not permitted
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/sys/abi/vsyscall32’: Operation not permitted
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/sys/abi’: Operation not permitted
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/sys/debug/exception-trace’: Operation not permitted
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/sys/debug/kprobes-optimization’: Operation not permitted
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/sys/debug’: Operation not permitted
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/sys/dev/cdrom/autoclose’: Operation not permitted
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/sys/dev/cdrom/autoeject’: Operation not permitted
chown: changing ownership of ‘/var/lib/postgresql/data/btrfs/subvolumes/8379a7255d3ced1aad294a0e48fe7290d55acee3df314fd357888f36d28ca714/proc/sys/dev/cdrom/check_media’: Operation not permitted
[…]
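The chown errors above walk /proc paths under docker's own btrfs/subvolumes directory, which suggests the postgres data volume is pointing at docker's storage area rather than at a dedicated Rockstor share under /mnt2. A diagnostic sketch (not from the thread) to confirm what the container actually mounts:

docker inspect -f '{{ json .Mounts }}' owncloud-postgres   # list the container's volume bindings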

Posts: 7

Participants: 3

Read full topic

How to visit the Django site admin


@catman wrote:

I want to visit the Django site admin, but I do not know the username and password. Please tell me the username and password. Thank you very much.
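If no superuser has been created yet, Django's standard createsuperuser command can create one. A minimal sketch, assuming shell access as root; the buildout wrapper path below is an assumption about Rockstor's layout:

cd /opt/rockstor
bin/django createsuperuser   # assumed buildout-generated Django wrapper; prompts for username/password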

Posts: 1

Participants: 1

Read full topic

ZoneMinder issue


@Tzana wrote:

Hi all,

I would like to use ZoneMinder, but I have an issue with it: I don't know where my video events are.
I followed the Rockstor documentation:
http://rockstor.com/docs/docker-based-rock-ons/zoneminder.html

So I created 2 shares for storage: zm-data and zm-mysql.
I finished the installation and went to the ZoneMinder UI.
I went into the settings, then storage, and created a new storage area with the path “/config”.
I set my cam to use the new storage.
But in the log, when an event happens: “zma_ma : Can’t mkdir /config/1: Permission denied”

How can I solve this, please?
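One hedged approach, assuming the zm-data share backing /config is mounted under /mnt2/zm-data and that the rock-on's container is named zoneminder (both assumptions):

docker exec zoneminder id            # find the uid/gid the container's processes run as
chown -R <uid>:<gid> /mnt2/zm-data   # then match that ownership on the share backing /config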

My ZoneMinder configuration :

The default storage (it works but is not accessible in my Rockstor):

Posts: 1

Participants: 1

Read full topic

Support.rockstor.com Down?

Unable to remove detached disk in pool


@magicalyak wrote:

I’ve replaced a btrfs disk and it shows as a detached member of the pool in the UI. The disk does not appear in the btrfs fi show output, and the remove command (UI) reports that there is not enough space to remove it.

        Traceback (most recent call last):

File “/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py”, line 68, in run_for_one
self.accept(listener)
File “/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py”, line 27, in accept
client, addr = listener.accept()
File “/usr/lib64/python2.7/socket.py”, line 202, in accept
sock, addr = self._sock.accept()
error: [Errno 11] Resource temporarily unavailable

![image|690x429](upload://4QCltBDL8AHgIoN1yvZy5U45N5G.png)

[root@rocky ~]# btrfs fi show
Label: ‘rockstor_rockstor’ uuid: 3e3b17e7-2490-484f-9c8a-f402b2a51517
Total devices 1 FS bytes used 14.54GiB
devid 1 size 122.66GiB used 18.06GiB path /dev/md125

Label: ‘tv’ uuid: 76b16cb2-0f13-401d-8395-c408cfc0fdfe
Total devices 4 FS bytes used 6.11TiB
devid 1 size 3.64TiB used 3.06TiB path /dev/sdc
devid 2 size 3.64TiB used 3.06TiB path /dev/sdi
devid 3 size 3.64TiB used 3.06TiB path /dev/sdh
devid 4 size 3.64TiB used 3.06TiB path /dev/sdb

Label: ‘backup’ uuid: 8ac79908-cb09-4568-bfc8-b0fd377dcf15
Total devices 1 FS bytes used 439.69GiB
devid 1 size 3.64TiB used 479.02GiB path /dev/sdg

Label: ‘movies’ uuid: c77c9722-7a5d-458c-bb1f-c077a950771d
Total devices 4 FS bytes used 5.40TiB
devid 7 size 3.64TiB used 2.71TiB path /dev/sdm
devid 8 size 3.64TiB used 2.71TiB path /dev/sdk
devid 9 size 3.64TiB used 2.71TiB path /dev/sdj
devid 10 size 3.64TiB used 2.71TiB path /dev/sde

[root@rocky ~]# btrfs scrub status /mnt2/tv
scrub status for 76b16cb2-0f13-401d-8395-c408cfc0fdfe
scrub started at Fri Aug 2 16:40:54 2019 and finished after 08:18:55
total bytes scrubbed: 12.22TiB with 0 errors
[root@rocky ~]#

Posts: 5

Participants: 2

Read full topic


Accessing home folders from another user


@andySF wrote:

I have a lot of users, and we are now implementing a solution from Konica Minolta called SafeQ. In order to be able to scan to every user's home folder, I need a group or user that has access through SMB to the home/ folders. This is because the SafeQ service on Windows runs in the context of a user which, if it exists on my Rockstor, will be able to write the scans into these paths.

I created a konica-scan user, added this user to the ACL on the home folder, and enabled extended ACLs in smb.conf using this example: https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs#Samba_Extended_ACL_Support but Samba will not grant me access to the home folders.

What is the best solution to this problem? Thank you.
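For reference, the relevant [global] settings from the linked Samba wiki page look like the sketch below; how they are merged into Rockstor's generated smb.conf is left to the admin, and the rest of the configuration is assumed unchanged:

[global]
    vfs objects = acl_xattr
    map acl inherit = yes
    store dos attributes = yes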

Posts: 1

Participants: 1

Read full topic

Snapshot cronjob throwing errors


@nandor wrote:

I finally have my production RockStor up and running. I set up outgoing email alerts and I am getting these with every snapshot:

From: (Cron Daemon) rockstor@xxx.com, 4:49 PM (25 minutes ago)

to rockstor

Traceback (most recent call last):
File “/opt/rockstor/bin/st-snapshot”, line 45, in <module>
sys.exit(scripts.scheduled_tasks.snapshot.main())
File “/opt/rockstor/src/rockstor/scripts/scheduled_tasks/snapshot.py”, line 95, in main
validate_snap_meta(meta)
File “/opt/rockstor/src/rockstor/scripts/scheduled_tasks/snapshot.py”, line 39, in validate_snap_meta
if meta[‘share’].isdigit():
AttributeError: ‘int’ object has no attribute ‘isdigit’
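The traceback shows meta['share'] arriving as an int, on which .isdigit() does not exist. The following is a minimal illustration of the kind of guard that avoids the error, not the actual upstream fix:

def share_is_id(meta):
    # Coerce to str before calling isdigit(), so both integer ids and
    # numeric strings are handled; purely illustrative.
    return str(meta.get('share', '')).isdigit()

# share_is_id({'share': 5})      -> True
# share_is_id({'share': '5'})    -> True
# share_is_id({'share': 'home'}) -> False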

Posts: 6

Participants: 3

Read full topic

I want to change it to Chinese, how do I do that?

InfiniBand 40Gb Network - Mellanox MLNX_OFED_LINUX - SOLVED


@roberto0610 wrote:

Hello, my name is Roberto from Wenatchee, WA, USA. I'm setting up a NAS for production as a video editing workflow system. I need to install Mellanox NIC drivers for InfiniBand.
My NIC is a PCIe x8 card.

Every time I try to install the 40Gb NIC drivers from the Mellanox website I get this error:
“Current operation system is not supported!”
Does anyone know the actual CentOS version used in this Rockstor flavor?

[root@datrom iso]# uname -mrs
Linux 4.10.6-1.el7.elrepo.x86_64 x86_64
[root@datrom iso]# hostnamectl
Static hostname: datrom
Pretty hostname: Datrom
Icon name: computer-laptop
Chassis: laptop
Machine ID: 94e2809df7ad41fe8ca42871d1113048
Boot ID: c53158e3b46a4f198d7326461ea39de2
Operating System: Rockstor 3 (Core)
CPE OS Name: cpe:/o:rockstor:rockstor:3
Kernel: Linux 4.10.6-1.el7.elrepo.x86_64
Architecture: x86-64
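To see the underlying base release that the Mellanox installer keys on, something like the following can help (a sketch; whether these files and packages are present on a given Rockstor 3 install is an assumption):

cat /etc/redhat-release                                   # base release string, if present
rpm -qa | grep -i -e centos-release -e rockstor-release   # packages carrying release information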

How can I change the operating system identification to say CentOS and its version or flavor instead of Rockstor 3?
Is there anyone with InfiniBand and Mellanox driver experience on CentOS or Linux who wants to point me in the right direction or help me with this project?

Posts: 17

Participants: 3

Read full topic

Root SSD died, advice on replacement best practice?


@Haioken wrote:

Hi All,

I need to reinstall my RS box, as the Transcend SSD hosting the root OS has decided it’s time to die.

A couple of questions regarding the current state.

  • I am running the latest Rockstor from the stable branch; has the issue with M.2 SSDs for the root OS been resolved, or should I stick with SATA SSDs?
  • Once installation has been completed, if I add a second SSD to the root BTRFS pool, will this break things? I don’t want to be in this position again.
  • What's the progress on the pending move to openSUSE? Is it better for me to manually install the openSUSE version following the dev notes thread, or should I expect a lot of breakage if I did this?

Those who have seen me here before know I don’t mind getting my hands dirty, but I’d also prefer the system to be relatively stable and usable as my mrs uses this box for streaming.

Throw your answers at me as quickly as possible, as the box is in pieces on my bench and I’m running out to get bits shortly! :o)

Cheers.

Posts: 10

Participants: 4

Read full topic

CentOS 7.6 production clean install - InfiniBand Mellanox


@roberto0610 wrote:

I have a clean install of CentOS 7.6 x64.
System has 32GB RAM, Intel i7 CPU,
Adaptec RAID10 16x4TB Hard Drives
Mellanox Connectx-3 with Dual 40Gbit port and MFT drivers installed.

Since the driver can't be installed on the original Rockstor distro, I wonder if there is a way to install Rockstor on top of my current clean CentOS install.

Is there anyone who can help?

Posts: 2

Participants: 2

Read full topic

Rock-on framework implementation


@Flox wrote:

Note: this document is split into several posts in order to fit within the character limit. Make sure to browse the second post as well for more information :wink: .

Preamble

This is a wikified post documenting how Rockstor implements the rock-on framework, and should thus be considered a living document, expected to be updated as necessary. User documentation on how to use the framework already exists (see link below); this write-up, however, intends to first summarize the overall rock-on framework concept, then describe its implementation from a developer's perspective, and centralize the discussion on its current state and future evolution. By combining these different elements into a single document, this post attempts to provide an integrative view of the current state of the rock-on framework and to bring together everybody's expertise and ideas to foster continuous, coordinated, and coherent improvement. In this context, recommendations and suggestions are more than welcome!

While many details will be provided, the focus will be on simplicity in order to give a general description of how the different processes work and interconnect. As a result, we will not go into a line-by-line description of all steps; only the code that directly mediates each step of interest will be shown, and the rest will be masked ((...)). For details, however, we will provide the reader with a link to the full code.

For a general description of the rock-on framework, please see the related section in Rockstor’s documentation:
http://rockstor.com/docs/docker-based-rock-ons/overview.html

Implementation overview

In essence, a rock-on corresponds to a set of settings in JSON format. Hosted in a public GitHub repository (rockon-registry), these definition files contain all the necessary information about the underlying docker containers and their configuration, as well as the rock-on's metadata. While all parameters are stored in Rockstor's database following a "key:value" structure, some of them, such as volumes and environment variables, have their value defined by the user during the rock-on install process. Indeed, these settings are surfaced in the webUI during the install process so that the user can customize them. Once all these settings are filled in by the user, Rockstor updates their values in the database before building the docker commands that run the underlying container(s).
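As an illustration, here is an abridged sketch of what such a JSON definition can look like. The rock-on, image, and label names are invented for this example; refer to the rockon-registry repository for the authoritative files and the full schema.

{
    "Example App": {
        "containers": {
            "example-app": {
                "image": "example/app",
                "tag": "latest",
                "launch_order": 1,
                "ports": {
                    "80": {
                        "description": "Example App WebUI port.",
                        "host_default": 8080,
                        "label": "WebUI port",
                        "protocol": "tcp",
                        "ui": true
                    }
                },
                "volumes": {
                    "/config": {
                        "description": "Share for the app's configuration data.",
                        "label": "Config Storage"
                    }
                }
            }
        },
        "description": "A fictional rock-on used here purely for illustration.",
        "ui": {"slug": ""},
        "version": "1.0",
        "website": "https://example.com"
    }
}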

Once the user clicks the “Submit” button at the end of the rock-on install wizard, Rockstor fetches all parameters linked to the given rock-on and builds a corresponding docker run command for each container defined in the rock-on. Once the docker run command is built, Rockstor triggers it as a background ztask surfaced in the webUI as “Installing”. A successful rock-on install thus results in the creation and start of its underlying docker container(s) using the settings defined in the JSON file and by the user during the install wizard.
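For the hypothetical definition sketched above, with the user mapping /config to a share named example-config, the resulting command would take roughly the following shape (an approximation for illustration only, not the exact command Rockstor generates):

docker run -d --name example-app \
    -p 8080:80/tcp \
    -v /mnt2/example-config:/config \
    example/app:latest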

Finally, toggling ON and OFF a rock-on from the webUI simply translates into starting and stopping the underlying container(s), whereas uninstalling it will remove it/them.

Implementation details

In this section, we will describe in more detail the different aspects of the rock-ons framework and provide heavy references to the corresponding parts of Rockstor’s code. While this is not an exhaustive documentation of the underlying code, this will hopefully help serve as an illustration of Rockstor’s logic and offer starting points and guidance into the specific parts of this framework.

Main items of interest

Database

All rock-on-related information is stored in the storageadmin database under dedicated but interconnected models (depicted in the picture below).

As you can see above, the rock-on-related models are centered around the DContainer model rather than the RockOn model. This is simply because the container is the base unit of the docker environment and thus represents the best entity to which to link all information. As a rock-on can include multiple containers, however, the RockOn model is the unit Rockstor uses to surface this information for the user to interact with.

Accordingly, the RockOn model mostly includes elements surfaced to the user in the webUI:
link to code

class RockOn(models.Model):
    name = models.CharField(max_length=1024)
    description = models.CharField(max_length=2048)
    version = models.CharField(max_length=2048)
    state = models.CharField(max_length=2048)
    status = models.CharField(max_length=2048)
    link = models.CharField(max_length=1024, null=True)
    website = models.CharField(max_length=2048, null=True)
    https = models.BooleanField(default=False)
    icon = models.URLField(max_length=1024, null=True)
    ui = models.BooleanField(default=False)
    volume_add_support = models.BooleanField(default=False)
    more_info = models.CharField(max_length=4096, null=True)

The DContainer model, on the other hand, is very simple and the only critical information it stores is its name:
link to code

class DContainer(models.Model):
    rockon = models.ForeignKey(RockOn)
    dimage = models.ForeignKey(DImage)
    name = models.CharField(max_length=1024, unique=True)
    launch_order = models.IntegerField(default=1)
    # if uid is None, container's owner is not set. defaults to root.  if it's
    # -1, then owner is set to the owner of first volume, if any.  if it's an
    # integer other than -1, like 0, then owner is set to that uid.
    uid = models.IntegerField(null=True)

As noted above, this is because the container is the base unit of the docker environment; its model thus mostly acts as a hub for all the specific information applied to it. This specific information is stored in dedicated models keyed by container and includes:

  • DImage: the name, tag, and repository of the docker image the container is based on.
    link to code
class DImage(models.Model):
    name = models.CharField(max_length=1024)
    tag = models.CharField(max_length=1024)
    repo = models.CharField(max_length=1024)
  • DPort: list of all ports defined in the JSON file, and their default host mapping.
    link to code
class DPort(models.Model):
    description = models.CharField(max_length=1024, null=True)
    hostp = models.IntegerField(unique=True)
    hostp_default = models.IntegerField(null=True)
    containerp = models.IntegerField()
    container = models.ForeignKey(DContainer)
    protocol = models.CharField(max_length=32, null=True)
    uiport = models.BooleanField(default=False)
    label = models.CharField(max_length=1024, null=True)
  • DVolume: List of all volumes and their share mapping. Note the uservol field enabled during post-install customization (see below), as well as its connection with the Share model, populated when the user selects a pre-defined Rockstor share from the drop-down menu at rock-on install.
    link to code
class DVolume(models.Model):
    container = models.ForeignKey(DContainer)
    share = models.ForeignKey(Share, null=True)
    dest_dir = models.CharField(max_length=1024)
    uservol = models.BooleanField(default=False)
    description = models.CharField(max_length=1024, null=True)
    min_size = models.IntegerField(null=True)
    label = models.CharField(max_length=1024, null=True)
  • DContainerLabel: Used during post-install customization only, this model stores simple key:value mappings for docker container labels.
    link to code
class DContainerLabel(models.Model):
    container = models.ForeignKey(DContainer)
    key = models.CharField(max_length=1024, null=True)
    val = models.CharField(max_length=1024, null=True)
  • DContainerEnv: This model stores simple key:value mapping for a container’s environment variable(s).
    link to code
class DContainerEnv(models.Model):
    container = models.ForeignKey(DContainer)
    key = models.CharField(max_length=1024)
    val = models.CharField(max_length=1024, null=True)
    description = models.CharField(max_length=2048, null=True)
    label = models.CharField(max_length=64, null=True)
  • DContainerDevice: This model stores simple key:value mapping for a container’s device binding.
    link to code
class DContainerDevice(models.Model):
    container = models.ForeignKey(DContainer)
    dev = models.CharField(max_length=1024, null=True)
    val = models.CharField(max_length=1024, null=True)
    description = models.CharField(max_length=2048, null=True)
    label = models.CharField(max_length=64, null=True)
  • DContainerArgs: This model stores command argument(s) to be added to the docker run command, as defined in the JSON file.
    link to code
class DContainerArgs(models.Model):
    container = models.ForeignKey(DContainer)
    name = models.CharField(max_length=1024)
    val = models.CharField(max_length=1024, blank=True)
  • DContainerLink: This model stores simple source:destination mappings used for docker container links.
    link to code
class DContainerLink(models.Model):
    source = models.OneToOneField(DContainer)
    destination = models.ForeignKey(DContainer,
                                    related_name='destination_container')
    name = models.CharField(max_length=64, null=True)
  • ContainerOption: This model stores the values of specific options defined in the rock-on’s JSON file. This information is not surfaced to the user.
    link to code
class ContainerOption(models.Model):
    container = models.ForeignKey(DContainer)
    name = models.CharField(max_length=1024)
    val = models.CharField(max_length=1024, blank=True)

As you may have noticed, one last model, DCustomConfig, differs from the others in that it exists at the rock-on level rather than at the container level. This model has a very specific use in the installation of two rock-ons (owncloud and openvpn), for which dedicated scripts exist.

Files

Three main files interact with these different database models:

  • rockon.py: fetches the list of all available rock-ons from the metastores (remote and local) and fills information from the JSON definition files into the database.
  • rockon_id.py: handles specific operations on a given rock-on: install, uninstall, update, start, and stop.
  • rockons.js: responsible for all UI operations, such as listing all available and installed rock-ons, the installation wizard, and post-install customization.

In addition, related helpers are located in rockon_helpers.py.

Rock-ons catalog (list)

To get the list of available rock-ons, Rockstor uses two different sources (termed “metastores”): one remote, and one local. Both are defined in Rockstor’s django settings:
link to code

ROCKONS = {
	'remote_metastore': 'http://rockstor.com/rockons',
	'remote_root': 'root.json',
	'local_metastore': '${buildout:depdir}/rockons-metastore',
}

Remote and local metastore

The remote metastore includes a list of all rock-ons in JSON format (root.json) as well as the JSON definition file for each rock-on in the root.json list. All files can be found in the Github rock-on registry.

Rockstor first fetches information from the remote metastore, loading the JSON definition file for each entry in the root.json list. It then does the same for the local metastore.
link to code

This procedure is triggered in rockon.py through the call to _get_available():

    def post(self, request, command=None):
                (...)
                rockons = self._get_available()
                (...)
                for r in rockons:
                    try:
                        self._create_update_meta(r, rockons[r])
                    (...)

As can be seen above, Rockstor then proceeds to update the database information for each rock-on through _create_update_meta().

Rock-on definition fetching

The entire process is taken care of by _create_update_meta().
link to code

First, a set of default values is used to create a RockOn model instance for the given rock-on. This instance is then completed and updated with the contents of the rock-on's JSON definition file. This includes rock-on-related information such as its description, link to the project website, version, presence of a webUI, more_info section, etc.

Then, the “containers” key in the rock-on JSON definition file is parsed to list the names of all the containers included in the rock-on. Each value (container) is then parsed to extract the container’s settings and update the corresponding models in the database accordingly. While the process is fairly straightforward, there are several important points:

  • If no information regarding the docker image’s tag is found in the JSON file, the default is set to latest.
    link to code
            defaults = {'tag': c_d.get('tag', 'latest'),
                        'repo': 'na', }
            io, created = DImage.objects.get_or_create(name=c_d['image'],
                                                       defaults=defaults)
            co.dimage = io
  • When filling in the default host port for a given host:container port mapping, Rockstor checks whether the host port is already set to be used by another available rock-on (installed or not). If it is, it simply sets the default to the next available port; a sketch of this logic follows this list. This explains why the default port presented to the user during a rock-on installation wizard can differ from the one defined in its JSON file. Notably, this check is only performed against ports defined in rock-on definitions and is thus not aware of whether a port is actually in use by another service on the local system. This is especially important to keep in mind for users who are running other applications on their machine, as conflicts can then arise.
    link to code
                    def_hostp = self._next_available_default_hostp(p_d['host_default'])  # noqa E501
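To make the default host-port selection just described concrete, here is an illustrative sketch (not Rockstor's actual code):

def next_available_default_hostp(requested, used_ports):
    # Walk upward from the requested default port until one is found that
    # no other known rock-on definition already claims.
    port = requested
    while port in used_ports:
        port += 1
    return port

# e.g. with 8080 and 8081 already claimed by other rock-on definitions:
# next_available_default_hostp(8080, {8080, 8081}) -> 8082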

Display in UI

Main file of interest: rockons.js

With all rock-on information in the database, Rockstor's front-end can simply fetch it and present it to the user. This is done via the Backbone collection RockOnCollection, which in turn uses the /api/rockons call to get and return the list of all rock-ons from the RockOn model in alphabetical order:
link to code

        return RockOn.objects.filter().order_by('name')

This list is then processed by the renderRockons function to extract the information used in the UI (such as the rock-on's name, description, "install" button, link to the webUI, or status), built through the following handlebars template:
link to code

Note that the status of all rock-ons with pending operations (install, uninstall) is updated automatically every 15 sec:
link to code

        this.updateFreq = 15000;

Posts: 3

Participants: 2

Read full topic


Unable to Resize Share


@nandor wrote:

Every time I try to reduce a share's size by 2TB I get this error:

      Traceback (most recent call last):

File “/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py”, line 41, in _handle_exception
yield
File “/opt/rockstor/src/rockstor/storageadmin/views/share.py”, line 247, in put
share_pqgroup_assign(share.pqgroup, share)
File “/opt/rockstor/src/rockstor/fs/btrfs.py”, line 1153, in share_pqgroup_assign
return qgroup_assign(share.qgroup, pqgroup, mnt_pt)
File “/opt/rockstor/src/rockstor/fs/btrfs.py”, line 1207, in qgroup_assign
raise e
CommandException: Error running a command. cmd = /usr/sbin/btrfs qgroup assign 0/259 2015/493 /mnt2//MainStorage. rc = 1. stdout = [’’]. stderr = [‘ERROR: unable to assign quota group: File exists’, ‘’]

Is that what this error is referring to?

Share size enforcement is temporarily disabled due to incomplete support in BTRFS. Until this status changes, the effective size of a Share is equal to the size of the Pool it belongs to.
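To see which qgroup parent/child assignments already exist on the pool before retrying (a diagnostic sketch):

btrfs qgroup show -pcre /mnt2/MainStorage   # -p/-c add parent/child columns, -r/-e add the limit columns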

Posts: 8

Participants: 2

Read full topic

Mellanox MLNX_OFED Rockstor 3 Driver && RAM-disk PAID


@roberto0610 wrote:

Hi, this is a feature request to the Rockstor team, especially the developers. I wonder if any of you would compile drivers to support InfiniBand and RDMA, or just add the ability to install the Mellanox ConnectX drivers.

I could pay up to $399 per-incident support if you can help me with:

  1. Compiling drivers on my rigs, and maybe making this feature available as an add-on.
  2. Compiling or preparing a panel to use RAM disks as pool storage.

I'm offering this here as a post, but I may need permission from the administrators.
Please let me know if this is the right way to request support for these 2 features.

Posts: 2

Participants: 2

Read full topic

Kernel panic after updating to 3.9.2-48


@legion wrote:

The quick fix is to boot into the prior kernel, and edit /etc/sysconfig/kernel and change:

UPDATEDEFAULT=yes
to
UPDATEDEFAULT=no

Save, and reboot.
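If the box should keep booting the prior kernel until a fixed update lands, the GRUB default can also be pinned explicitly (a sketch; the menu entry index 1 is an assumption, and this relies on GRUB_DEFAULT=saved in /etc/default/grub, the CentOS 7 default):

grub2-editenv list    # show the currently saved default entry
grub2-set-default 1   # assumed index of the previous kernel's menu entry (0-based)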

Posts: 1

Participants: 1

Read full topic

Feature request: Kubernetes CSI driver


@grizzly wrote:

Many storage providers, including FreeNAS, now come with a Kubernetes Container Storage Interface (CSI) driver. This allows storage to be managed declaratively in Kubernetes YAML files. It would be great to add Rockstor to this list.

Most implementations of this, e.g. FreeNAS's, are on GitHub, so hopefully it would not be too hard to adapt one. This project's documentation links should help get started. It would satisfy others' calls to "link shares to persistent volumes and persistent volume claims".
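For illustration, declarative storage through a CSI driver is consumed as an ordinary PersistentVolumeClaim; the storage class below is hypothetical, since no Rockstor CSI driver exists yet:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rockstor-btrfs   # hypothetical class a Rockstor CSI driver would back
  resources:
    requests:
      storage: 100Gi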

Posts: 4

Participants: 3

Read full topic

RAM disk as pool storage on a Rockstor box


@roberto0610 wrote:

A RAM disk as high-performance, near-zero-latency video editing cache storage: isn't it an amazing idea? Due to its performance-oriented nature, it is mostly used for temporary data, and I would like to give it a try as part of the Rockstor disk/pool solution.

I am wondering if an RS box can handle RAM disks as pools. If so, then it may also be able to stack them in parallel.
Can you imagine 1-8 now-inexpensive server blades with 512GB of RAM each, set up as a server cache monster? I may be flying too high.

I only have one old Dell R910 server. I wish I could use its RAM as cache/scratch/temp storage for a video editing workflow. My network is already InfiniBand at 40Gb and working.

I'm willing to use tmpfs, but at the moment I know very little.
I'm going to start with this information and check whether the RS box can even see a tmpfs as a traditional disk:
What is RAM disk? && How to create RAM disk?
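One caveat worth noting up front: tmpfs is a filesystem, not a block device, so btrfs cannot use it as a pool member. The brd kernel module creates real /dev/ramN block devices instead; a sketch follows, though whether Rockstor's disk scan surfaces such devices is not guaranteed:

modprobe brd rd_nr=1 rd_size=33554432   # one RAM-backed block device, /dev/ram0, of 32 GiB (rd_size is in KiB)
mkfs.btrfs -f /dev/ram0                 # contents are lost on every reboot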

Anybody, somebody willing to come on board?

Posts: 2

Participants: 1

Read full topic
