Hey guys, good evening. I apologize if this has already been covered somewhere, but I wasn't able to find anything specific to my problem. This week I'm facing a lot of problems with my Rockstor box: I see a lot of CPU and disk activity even though I'm not running any processes. I turned off the Rock-On service to try to figure out what was happening, but the activity continues.
When I try to restart the Rock-On service, I get this error:
Unknown internal error doing a GET to /api/disks/smart/info/2
After about 15 minutes nginx faults and returns a 502 Bad Gateway; then I need to restart the machine to regain access. My biggest fear is losing all my files, and reading the forums I can't find a thread related to my situation. My Rockstor version is 3.9.1.
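Since the failing endpoint is the per-disk S.M.A.R.T. info call, my rough plan is to check the same data, and what is generating the activity, from the command line (the device name below is just an example):

# full S.M.A.R.T. report for one disk, and just the overall health verdict
smartctl -a /dev/sda
smartctl -H /dev/sda

# show only the processes that are actually doing disk I/O
iotop -o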
Hello,
It seems that nfs-utils is newer on openSUSE and the export entries require no_subtree_check to be added in /etc/exports. Even though this option is now applied automatically, a warning appears if it is not set explicitly.
How to reproduce: on your openSUSE Rockstor machine, run: exportfs -a
How to fix: add the no_subtree_check option to the share entries in /etc/exports.
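For illustration, an exports entry with the option added might look like the line below; the share path, client range, and the other options are just examples, so keep whatever Rockstor already generates and only append no_subtree_check:

# /etc/exports -- example entry only
/export/my_share 192.168.1.0/24(rw,async,insecure,no_subtree_check)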
I need to change the Nginx-proxy-manager and ownCloud rock-ons. I have to change configuration files inside the Docker images for performance tuning.
I cannot find a Dockerfile or docker-compose definition on my Rockstor. Where are they?
I know that I can enter the running container and edit the files, but the container will lose the changes after a restart.
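From what I've read, Rockstor builds rock-ons from JSON definitions rather than docker-compose files, so my rough workaround idea is to find out which host directories the container already bind-mounts and keep my changes there (the container name, image, and paths below are only examples and should be checked against each project's documentation):

# list the host paths the rock-on container already bind-mounts
docker inspect --format '{{json .Mounts}}' nginx-proxy-manager

# one generic way to keep a config file across restarts is to bind-mount it
# from the host when the container is (re)created
docker run -d --name nginx-proxy-manager \
  -v /mnt2/npm-config:/data \
  jc21/nginx-proxy-manager:latest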
Background: I set up my Rockstor about 3 years ago under the Testing (free) channel, but it was in storage for a while. When I tried to bring it online again, the OS disk wouldn't boot due to superblock issues, so I re-installed and subscribed to the Stable channel. The data disks have remained the same, and I have added a new disk to the main pool (it is currently rebalancing).
Due to the re-install, I’ve lost my previous configuration, though I understand rock-on config backup was added between my last use and now, so no big loss.
The issue now is that I had Emby and Deluge running on the previous install, but I wanted to try out Jellyfin, so I made my own rock-on file based on Emby's. When I attempted to install it, it failed, pointing me to check the logs. Since I just wanted a running media server, I tried the same with Emby, figuring the official rock-on JSON might be better, but no luck; in fact it gave the same error in the logs.
Checking /opt/rockstor/var/log/rockstor.log reveals:
[07/Jun/2020 14:20:38] ERROR [storageadmin.views.rockon:85] Rockon (Jellyfin (Linuxserver.io)) is in pending state but there is no pending or failed task for it.
[07/Jun/2020 14:26:17] ERROR [storageadmin.views.rockon:85] Rockon (Emby server) is in pending state but there is no pending or failed task for it.
I've scanned the forums for posts similar to this but haven't found any with the same error, so I'm wondering if I'm missing something or if my fresh install somehow got borked. Do let me know what other logs I might check to help troubleshoot.
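In the meantime, these are the other places I plan to look, assuming a fairly standard install (the service names are my assumption of the defaults):

# docker daemon activity around the time of the failed install
journalctl -u docker --since today

# Rockstor's own services
systemctl status rockstor rockstor-pre

# whether the image was actually pulled and a container created
docker images
docker ps -a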
I have downloaded several ISO files for 3.9.1 on various laptops and desktops and burned them to DVD with Nero Burning ROM and PowerISO. I now have a set of coasters, as every disc, when run and set to test, reports corruption and suggests not to use it to install.
Can it be downloaded as a torrent file, or is there a way to test it before burning to DVD?
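One way to rule out a corrupt download before burning is to compare the ISO's checksum against the value published alongside the download (the filename below is just an example):

# Linux
sha256sum Rockstor-3.9.1.iso

# Windows
certutil -hashfile Rockstor-3.9.1.iso SHA256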
I'm not sure how useful this is for everyone, but I thought I'd explain some tests I performed regarding the performance of an old OCZ Vertex SSD on a USB 2.0 interface.
This is not a benchmark of the SSD and this may not provide the same results for everyone.
For a baseline, I'm testing with older hardware (similar in age to my Rockstor box) that has these interfaces:
On-board SATA2 (3 Gb/s)
On-board USB 2.0 (480 Mb/s)
PCIe USB 3.0 (5 Gb/s)
I found that MiniTool Partition Wizard has a "Disk Benchmark" tool, hence these are screenshots from there.
I used the default parameters, except for setting the test mode to perform both Sequential & Random tests.
From my observations, it uses a readable file system and performs the tests within that location, so it is not an indicator of whole-drive performance.
(As you might gather from the screenshots, the tests were performed mostly in reverse order to this report.)
To start with, let's assume that the results with the SSD connected to the SATA2 interface represent the maximum performance I'd expect from it in this test.
Now, if I take the same SSD, attach it to a USB 3.0 SATA dock on the USB 3.0 interface, and run the same test;
I now swap the USB 3.0 SATA dock for an inline USB 3.0 SATA adaptor cable on the USB 3.0 interface, and the results are very similar (I don't plan to install a SATA dock inside my Rockstor box, so it's best to compare against the actual hardware to be used);
Since my target hardware is USB 2.0 capable, I'll run the same test with a USB 2.0 SATA adaptor cable on the USB 3.0 interface;
But will a USB 2.0 device perform the same on a USB 2.0 port as it does on a USB 3.0 port? Logic suggests it should, but here are the results;
This drop in performance isn't logical. Perhaps the USB 2.0 SATA adaptor isn't the best fit, so I try the USB 3.0 SATA adaptor on the USB 2.0 port;
Strange, but this results in very similar performance regardless of the adaptor used, and regardless of the fact that the SSD can easily outperform every interface it is connected through.
But what does this mean for Rockstor?
Basically, if Rockstor's database reads and writes in small 4 KB chunks, a USB 2.0 device is going to perform at under 8 MB/s. If Rockstor's database requires more than 30 MB/s of throughput, the USB 2.0 interface should not be used.
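For anyone wanting to reproduce something similar on Linux rather than with the Windows tool above, a rough fio equivalent of the 4 KB random and sequential passes would be along these lines (the file path and sizes are just examples):

# 4 KiB random-read pass against a file on the drive under test
fio --name=randread4k --filename=/mnt/test/fio.tmp --rw=randread \
    --bs=4k --size=1g --direct=1 --runtime=60 --time_based

# large-block sequential read pass for comparison
fio --name=seqread --filename=/mnt/test/fio.tmp --rw=read \
    --bs=1m --size=1g --direct=1 --runtime=60 --time_based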
I know, I know, bleeding edge is never a good idea. But I just had to upgrade my kernel and btrfs-progs again, since I hadn't had any issues doing so before on my box (still running the CentOS base with the 3.9.2.57 version).
Interestingly (or scarily), this time it did not go well.
The kernel upgrade itself (to 5.7), together with the newest btrfs-progs 5.6.1, seemed to go fine (at least there were no error messages).
However, upon reboot the trouble started. The WebUI is inaccessible and things like the docker service failed (I assume because the Rock-On root is not mounted).
journalctl -xe | grep 'docker' shows that
rockstorw systemd[3578]: Failed at step EXEC spawning /opt/rockstor/bin/docker-wrapper: No such file or directory
and under
systemctl list-units --state=failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● docker.service loaded failed failed Docker Application Container Engine
lsblk shows
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 0 9.1T 0 disk
sdb 8:16 0 9.1T 0 disk
sde 8:64 0 9.1T 0 disk
sdc 8:32 0 9.1T 0 disk
sda 8:0 0 119.2G 0 disk
├─sda2 8:2 0 11.9G 0 part [SWAP]
├─sda3 8:3 0 106.8G 0 part /home
└─sda1 8:1 0 500M 0 part /boot
so, at least the block devices are there.
ls /mnt2/ gives me my shares, but, since nothing has been mounted, they’re obviously empty
btrfs subvolume list /mnt2/4xRAID5
shows me a ton of subvolumes connected to the RockOn root, but not the other shares
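As a sanity check, my plan is to mount the pool manually and read-only to confirm the data is still intact (the mount point below is just an example; the label matches the pool name above):

mkdir -p /mnt/check
mount -o ro LABEL=4xRAID5 /mnt/check
btrfs subvolume list /mnt/check
ls /mnt/check
umount /mnt/check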
Obviously, I couldn't leave well enough alone… But before I reinstall and try to remember which additional services I had created or installed at the OS level, I wanted to see whether the community can lend me a hand - even if it means "reinstall, you dummy, and don't do this again until openSUSE is ready".
I'm trying to install Rockstor onto an NVMe drive, but it fails to install the bootloader during the post-install operations, so Rockstor cannot boot.
This is what I managed to get from the logs.
The first time around I was troubleshooting the EFI label, and I figured out how to modify the label to get it working with UEFI; now I'm stuck on this error and cannot boot after installation.
Can you please help me fix this and get Rockstor finally working?
Traceback (most recent call last):
File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
yield
File "/opt/rockstor/src/rockstor/storageadmin/views/rockon.py", line 406, in _get_available
cur_res = requests.get(cur_meta_url, timeout=10)
File "/opt/rockstor/eggs/requests-1.1.0-py2.7.egg/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/opt/rockstor/eggs/requests-1.1.0-py2.7.egg/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/rockstor/eggs/requests-1.1.0-py2.7.egg/requests/sessions.py", line 279, in request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
File "/opt/rockstor/eggs/requests-1.1.0-py2.7.egg/requests/sessions.py", line 374, in send
r = adapter.send(request, **kwargs)
File "/opt/rockstor/eggs/requests-1.1.0-py2.7.egg/requests/adapters.py", line 215, in send
raise Timeout(e)
Timeout: HTTPConnectionPool(host='rockstor.com', port=80): Request timed out. (timeout=10)
I found people who had the same problem
and tried the troubleshooting steps they used,
but they don't work in my case.
I don't know what I need to set up first…
I created the rock-ons share, 5 GB as requested.
I tried to repair it with the small wrench…
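Since the traceback above points at a timeout reaching the rock-on metadata host, a basic connectivity check from the box might look like this (the hostname is taken from the traceback):

ping -c 3 rockstor.com
curl -I --max-time 10 http://rockstor.com/

# confirm DNS is set up at all
cat /etc/resolv.conf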
I want to install Rockstor in a VM on my Ubuntu 20.04 system to see how it works. I followed a guide which had me install and use Virt-Manager. I went through enabling storage and assigning 2 GB of memory and 2 CPUs; it then showed me the config and I started the install from the ISO. It soon stopped, showing the list of error messages in the screenshot below (not sure if I uploaded it properly).
I want to thank Phil for replying to an email I sent to support before I joined the forum. In my email I said I am very excited to have found Rockstor, particularly as I want to implement and experience the use of BTRFS. I realise that a key element of BTRFS (its Raid5/6 support) is currently presented as "not for production" because of the write hole after unclean shutdowns, but for me the storage capacity efficiency that Raid5 provides, along with the BTRFS ability to add disks of different sizes to pools, are incredibly important. I have done a great deal of research on NAS systems and I have been disappointed that systems like FreeNAS, NAS4Free, OMV and Unraid either do not support BTRFS or do so in a way that makes it hard for general NAS users to implement. Unraid is better, but the way it uses BTRFS is a big compromise, I think.
I am acquiring the hardware to build a good NAS with:
a Supermicro X11SCL-LN4F MB supporting 6 SATA drives,
a Fractal Design Node 804 case that can accommodate 8x 3.5in drives
a current-model Intel Core i3 CPU.
I have a good UPS for my current systems, and I plan on 3x 4TB HDDs to start plus a couple of SSDs for the system and the cache pool. I spent my whole career on the technical side of corporate IT, escaping to macOS when I retired 20 years ago, and I have built several personal systems over the last 30 years, including my current macOS Hackintosh/Linux system (I was able to build an iMac equivalent for much less).
I am a definite amateur when it comes to Linux (I am currently doing an Introduction to Linux course provided online by the Linux Foundation). But with help from the forum I hope to find my way.
I posted about my initial attempt to build Rockstor in KVM on my Ubuntu 20.04 system. For that I actually used the Rockstor 3.9.1 ISO, which, surprisingly, the Rockstor page on SourceForge still says is the current version. I didn't look more closely to see all the subsequent builds. Anyway, now that I understand the move to openSUSE, I am going to try installing one of the openSUSE versions. I have installed VirtualBox to see if it is better for me than KVM. From what I've seen so far it is more informative and easier to follow. But Phil has suggested a different approach using KVM as a starter. I'll see how I go with VirtualBox first, as I have it ready to go.
Anyway, I am very glad to be able to participate in such a newbie-friendly community which aims to produce a proper BTRFS NAS.
I've come across an issue where I cannot get into my machine any more.
I changed the "root" share owner from root to my username, and since then I'm not able to access it from the GUI. I can access it from the machine itself, but I don't know how to fix the issue. Can anyone help?
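My rough idea, heavily hedged since I'm not sure exactly what the webUI changed under the hood, is to put the owner back from the command line and restart the services; the path is an assumption, and -R should only be added if the original change was applied recursively:

# assuming the "root" share corresponds to the system root subvolume
chown root:root /mnt2/root
# if that path doesn't exist, / itself would be the other candidate
# chown root:root /
systemctl restart rockstor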
As one of the drives in my Raid1 array has started showing some errors (most likely age-related), I need to replace it and would like to take the opportunity to expand my storage capacity as well.
Because I see multiple options for how to proceed, I wanted to ask for the community's feedback based on people's experience with such a procedure. I have read through several relevant posts in the forum and the GitHub repositories, but given that these span a rather wide period of time, their comments and recommendations may no longer be accurate due to recent improvements in Rockstor and Btrfs itself.
I thus thought I would lay out the options in front of me and see what the consensus is on each one of them. By providing as much information and as many resources as possible, I'm hoping this can benefit other users as well.
In this spirit, thanks to any experienced user who corrects any inaccuracy I may have written.
Aims and requirements
My current data pool consists of:
Drive A: 3 TB HDD
Drive B: 3 TB HDD
Drives A and B are combined in a single Raid1 pool. Note also that the pool is rather full so I have about ~2.7 TB to deal with.
Unfortunately, Drive A needs to be replaced. In the end, I would thus like to have the following pool (still in Raid1):
Drive B: 3 TB HDD
Drive C: 8 TB HDD
Drive D: 8 TB HDD
I would also like to try doing everything from Rockstor webUI and avoid the command line, as an exercise and test of Rockstor. Overall time to complete the move is paramount, however, so if a cli-approach has a substantial advantage over a webUI-only approach, I’ll pick that.
Finally, this would be conducted once the openSUSE rebase has been completed, meaning it would be running kernel and btrfs versions of Leap 15.2.
Options
Quotas would be disabled prior to any operation.
Because I need to both add and replace disks, there are several strategies combining replacement and addition of a drive. Notably, as I currently have only one free SATA port on my motherboard, I see the following options available to me:
option A:
Remove Drive A from the pool
Add Drive C and Drive D to the pool in one go
option B:
Add Drive C to the pool
Remove Drive A from the pool
Add Drive D to the pool
option C:
Replace Drive A with Drive C in the pool
Add Drive D to the pool
Comments
Option A
While this option seems straightforward, it would leave the pool with a single device at the end of the first step, which would imply a conversion from Raid1 to single; that conversion is time-consuming and demanding in terms of IO as well (if I'm correct, at least). Furthermore, it would also require me to convert back from single to Raid1 at the end of step 2, thereby adding even more time and IO wear on the drives. I would thus consider option A the least favorable.
Option B
If my understanding is correct, this procedure would trigger a balance at the end of each step listed, resulting in a total of 3 balances. A big advantage is that no Raid level conversion would be required.
In detail, the procedure would be as follows (a rough CLI equivalent of these steps is given after the list):
Turn off the machine
Plug in Drive C
Turn on Rockstor and use the “Resize/ReRaid” feature to add Drive C to the Raid1 pool
Monitor progress of the triggered Balance procedure using the “Balance” tab.
Once the balance has completed, use the “Resize/ReRaid” feature to remove Drive A from the pool.
Monitor progress of the triggered Balance procedure using the “Balance” tab.
Once the balance has completed, turn OFF the machine, unplug Drive A, plug in Drive D, turn ON Rockstor, and then use the “Resize/ReRaid” feature to add Drive D to the pool.
Monitor progress of the triggered Balance procedure using the “Balance” tab.
Once the balance has completed, use Rockstor as usual.
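For reference, my rough CLI picture of what the webUI steps above wrap would be the commands below; the device names and pool mount point are only examples, and I intend to stay in the webUI:

btrfs device add /dev/sdc /mnt2/pool_name      # add Drive C; Rockstor then starts a balance
btrfs balance start /mnt2/pool_name
btrfs device delete /dev/sda /mnt2/pool_name   # remove Drive A (its data is relocated as part of the delete)
btrfs device add /dev/sdd /mnt2/pool_name      # add Drive D, followed by another balance
btrfs balance start /mnt2/pool_name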
Option C
Similar to option B, option C would not require a Raid level conversion. Furthermore, if my understanding is correct, option C would imply only two balances, an advantage over option B. Nevertheless, it would not be possible to do it entirely through the webUI, as disk replacement is not yet implemented there (see the related GitHub tracking issue).
Although I still need to test it in a VM, the procedure would be similar to:
Turn off the machine
Plug in Drive C
Turn on Rockstor
Remotely open an SSH session and run: btrfs replace start -r <Btrfs-device-id-of-DriveA> /dev/sd<letter-of-DriveC> /mnt2/pool_name.
Monitor status with: btrfs replace status /mnt2/pool_name.
Once completed, resize the filesystem to take advantage of the bigger drive: btrfs fi resize <Btrfs-device-id-of-DriveC>:max /mnt2/pool_name.
Note: here, I'm not sure how the disks and pools would look in the Rockstor webUI… I still need to test that.
Once completed, turn OFF the machine, unplug DriveA, plug in Drive D, turn Rockstor ON, and use the “Resize/ReRaid” feature to add Drive D to the pool.
Monitor progress of the triggered Balance procedure using the “Balance” tab.
Once the balance has completed, use Rockstor as usual.
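Whichever option I end up picking, once everything has settled a quick sanity check of the final layout would be something like this (the mount point is an example):

btrfs fi show /mnt2/pool_name         # should list only Drives B, C and D
btrfs fi usage /mnt2/pool_name        # confirm data and metadata are still Raid1
btrfs balance status /mnt2/pool_name  # confirm no balance is still running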
Overall
Between options B and C, it seems to me that the biggest difference lies in how efficient the add_C + remove_A procedure is compared to replace_A_with_C. As I haven't tested it yet, that's something I still wonder about.
On one hand, the Github issue linked above reads:
N.B. it is generally considered to be a longer process to use replace rather than:
“btrfs dev add” and then “btrfs dev delete”, it might make sense to suggest this course of action in the same UI.
On the other hand–and a year more recent–the following forum post reads:
I think that the general opinion is that a ‘btrfs replace’ is the more preferred, read efficient, method to btrfs dev add, btrfs dev delete, or the other way around.
When you have a device that’s in the process of failing or has failed in a RAID array you should use the btrfs replace command rather than adding a new device and removing the failed one. This is a newer technique that worked for me when adding and deleting devices didn’t however it may be helpful to consult the mailing list of irc channel before attempting recovery.
Interpretations
Based on the information above, the btrfs replace route (option C) now seems to be the preferred method from an efficiency perspective, but I'm unsure how Rockstor would "deal" with it. However, thanks to recent improvements in drive removal and pool attribution, I believe I should be able to remove any physically removed disk that is detected as detached without too much of a problem.
If the gain in efficiency over option B is not that substantial, though, it may not be worth the additional “hassle”.
As mentioned, I still plan on testing options B and C in a VM and comparing the overall time needed to complete each, for instance, but I would appreciate it if others could share any recent experience with similar operations.
So I encountered some bad system disk issues after a couple of hard reboots and decided to take the plunge into openSUSE land. It turned out the system disk itself was the culprit (it went read-only, with many parent transid / csum missing errors and the like), so I figured I'd automate the post-install with an Ansible playbook in case I need to do it again.
I'm running Leap 15.2, so I haven't tested on 15.1 or Tumbleweed (I avoided Tumbleweed because of Python 3 issues that will get worked out later), but it should work well on Leap 15.1 systems as well.
Pro tip: during user creation, when you select "Skip User Creation" and set the root user password, you can load in an SSH key if you have another drive mounted that contains it, which makes running the playbook that much easier. I ended up logging in and ssh-copy-id-ing myself, but it's a good note for the future.
Use at least a 20 GB system disk; this automatically enables the boot-to-snapshot functionality.
Server (non transactional)
Default partitioning
Skip user creation (then only root password to enter and leaves all additional users as Rockstor managed)
Optional, load ssh key from thumb drive for root user access via ansible
Click “Software” heading, then uncheck “AppArmor”, then click Next or Apply (I forget which it is specifically)
Disable Firewall
Leave SSH service enabled
Switch to NetworkManager
After the install and initial reboot, make sure you can log in using the previously created SSH Key, or create one now and load it via ssh-copy-id
Create an inventory file for ansible, something like the following in hosts.yml:
all:
  hosts:
    rockstor:
      ansible_ssh_host: 192.168.88.100 # Use your server's IP here
      ansible_user: root
Run the ansible playbook: ansible-playbook -i hosts.yml --private-key=~/.ssh/rockstor_ed25519.key main.yml
Enjoy your OpenSUSE-based Rockstor!
It works well enough, but it's all in one file, so I'm sure it could use some improvements in the organizational department; still, I wanted to share it and see if others think it useful as well.
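To give an idea of the shape without pasting the whole thing, a trimmed-down sketch of the kind of tasks such a playbook might run is below; the repository URL is a placeholder, and the package and service names should be checked against the current Rockstor install documentation:

# main.yml -- illustrative excerpt only
- hosts: rockstor
  tasks:
    - name: Add the Rockstor package repository (URL is a placeholder)
      zypper_repository:
        name: rockstor
        repo: "http://example.com/rockstor/leap-15.2/"
        auto_import_keys: yes

    - name: Install the rockstor package
      zypper:
        name: rockstor
        state: present

    - name: Enable and start the Rockstor bootstrap service
      systemd:
        name: rockstor-bootstrap
        enabled: yes
        state: started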
When building a Rockstor NAS for a friend, I chose WD Red drives for a RAID 1 array. It works fine and he's been quite happy with it for the past few months. But I recently heard about the rather shady issue of WD quietly introducing SMR technology into the Red line of drives, and how those drives don't play well with the ZFS file system (9 days to rebuild an array!). Is the same true with btrfs?
The only issue so far has been that writing large files is noticeably slower than I would normally expect. That isn't really a problem, as most of the files sent to the NAS are quite small, but I'm wondering: if a drive were to fail, would we be in for an inordinately long process of resilvering the mirror?
I so wish I had gone with Seagate IronWolf.