Channel: Rockstor Community Forum - Latest topics

[Rockstor-bootstrap] allow for share mount to fail?


@Flox wrote:

I couldn’t find a title that would describe what I wanted to ask, but in short, I wonder whether it would be a good idea to allow some shares to fail mounting during Rockstor boot (rockstor-bootstrap.service).

I’ve come to wonder about this because I recently experienced a failure to mount all of my shares after rebooting the machine: one share failed to mount, which led the rockstor-bootstrap service to fail:

[root@rockstor ~]# systemctl status rockstor-bootstrap
● rockstor-bootstrap.service - Rockstor bootstrapping tasks
   Loaded: loaded (/etc/systemd/system/rockstor-bootstrap.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2020-08-04 18:40:23 EDT; 4 days ago
 Main PID: 15745 (code=exited, status=1/FAILURE)

Aug 04 18:40:23 rockstor bootstrap[15745]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['In...d be decoded']
Aug 04 18:40:23 rockstor bootstrap[15745]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['In...d be decoded']
Aug 04 18:40:23 rockstor bootstrap[15745]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['In...d be decoded']
Aug 04 18:40:23 rockstor bootstrap[15745]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['In...d be decoded']
Aug 04 18:40:23 rockstor bootstrap[15745]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: ['In...d be decoded']
Aug 04 18:40:23 rockstor bootstrap[15745]: Max attempts(15) reached. Connection errors persist. Failed to bootstrap. Error: ['Internal Server Error: No JSON object could be decoded']
Aug 04 18:40:23 rockstor systemd[1]: rockstor-bootstrap.service: main process exited, code=exited, status=1/FAILURE
Aug 04 18:40:23 rockstor systemd[1]: Failed to start Rockstor bootstrapping tasks.
Aug 04 18:40:23 rockstor systemd[1]: Unit rockstor-bootstrap.service entered failed state.
Aug 04 18:40:23 rockstor systemd[1]: rockstor-bootstrap.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

The service failed due to the following error while mounting a particular share:

[04/Aug/2020 18:39:54] ERROR [system.osi:119] non-zero code(1) returned by command: ['/usr/sbin/btrfs', 'qgroup', 'show', '/mnt2/main_pool/Photos']. output: [''] error: ["ERROR: cannot access '/mnt2/main_pool/Photos': Input/output error", '']
[04/Aug/2020 18:39:54] ERROR [storageadmin.middleware:32] Exception occurred while processing a request. Path: /api/commands/bootstrap method: POST
[04/Aug/2020 18:39:54] ERROR [storageadmin.middleware:33] Error running a command. cmd = /usr/bin/mount -t btrfs -o subvolid=720 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5LLP56J /mnt2/Photos. rc = 32. stdout = ['']. stderr = ["mount: /dev/sdb: can't read superblock", '']
Traceback (most recent call last):
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/core/handlers/base.py", line 132, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/views/decorators/csrf.py", line 58, in wrapped_view
    return view_func(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/views/generic/base.py", line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 452, in dispatch
    response = self.handle_exception(exc)
  File "/opt/rockstor/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 449, in dispatch
    response = handler(request, *args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/command.py", line 121, in post
    import_shares(p, request)
  File "/opt/rockstor/src/rockstor/storageadmin/views/share_helpers.py", line 204, in import_shares
    mount_share(nso, '%s%s' % (settings.MNT_PT, s_in_pool))
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 607, in mount_share
    return run_command(mnt_cmd)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 121, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/mount -t btrfs -o subvolid=720 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5LLP56J /mnt2/Photos. rc = 32. stdout = ['']. stderr = ["mount: /dev/sdb: can't read superblock", '']

Yes, I know this doesn’t look good for this share (it actually doesn’t matter and is a separate problem for a separate thread, I believe), but what I would like to point out is that all the other shares can be mounted successfully individually:

/usr/bin/mount -t btrfs -o subvolid=<subvolid> /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5LLP56J /mnt2/<share-name>

I was thus wondering whether it would be a good idea to allow for such failures to happen and not fail the bootstrap procedure. In this context, for instance, all the other shares (and services relying on these shares) would still be functioning properly while only the “bad” share would be displayed as problematic (unmounted).
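A minimal sketch of what such tolerance could look like. The names mount_share and CommandException mirror the traceback above, but both are stubbed here — this is an illustration of the idea, not Rockstor’s actual implementation:

```python
# Hypothetical sketch of a fault-tolerant bootstrap mount loop.
# mount_share / CommandException echo names from the traceback above,
# but are stand-ins; the "Photos" share simulates the bad disk.

class CommandException(Exception):
    pass

def mount_share(share, mnt_pt):
    """Stub for the real btrfs mount call; fails only for 'Photos'."""
    if share == "Photos":
        raise CommandException("mount: /dev/sdb: can't read superblock")

def import_shares_tolerant(shares, mnt_root="/mnt2"):
    """Try every mount; collect failures instead of aborting on the first."""
    failures = []
    for share in shares:
        try:
            mount_share(share, "%s/%s" % (mnt_root, share))
        except CommandException as exc:
            # Record the problem so the Web-UI could flag this share
            # as unmounted, while the remaining shares still mount.
            failures.append((share, str(exc)))
    return failures

failures = import_shares_tolerant(["Music", "Photos", "Documents"])
```

With this shape, bootstrap could still log a warning (or exit non-zero) when failures is non-empty, while services depending on the healthy shares keep working.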

In my one particular case, this would be helpful (as long as there’s an element in the UI letting me know that a problem occurred during bootstrap, pointing me to the problematic share(s)), but I’m not sure whether there are cases where such a mechanism would be more problematic than helpful.

Any feedback/insight?

Posts: 1

Participants: 1


Handbrake Autovideoconverter


@amatthews wrote:

I need help configuring the Handbrake Rock-on. It has an automatic video converter, but for the life of me I cannot figure out how to change the default preset to another preset that better suits my needs. The GUI settings have no effect on the background converter.

Thanks
Anthony

Posts: 2

Participants: 2

SMB access permission mystery


@ceh-u wrote:

### Brief description of the problem
I’m running Rockstor 4.0.1.0 and had set up a couple of SMB shares. I’m accessing the shares with files from my OneDrive folder on macOS 10.15.6 (Catalina). Today on my Mac I have been getting red minus signs on the corner of the share folders, and cannot see their contents. This is the standard macOS indicator of not having access permission. The same happens to any folder from my Mac account.

### Detailed step by step instructions to reproduce the problem
Any folder I copied from my Mac to the share folder would immediately be tagged like this.

Initially the NAS owning user of the share was set in Rockstor as my admin user, which wasn’t known to macOS. But I had had no problems before today accessing and opening files in the folder. I also have a share used by Mac for its Time Machine disk. This has been unaffected by the problem.

Seemed to me this was obviously an access permission issue so I tried:

changing the share owner to a user also defined in macOS - no change
adding in macOS a user with the same name and pw as the initial NAS owner ID - no change

Then, while looking for a place to post this problem, I saw another post about a different aspect of Samba, where the poster was pointed to the Rockstor Samba documentation, which says to include the share name after the NAS IP in the connect command. Bingo! The share-contained folders and files appeared OK.

But then I had to restart my Mac, and when it came up the share at issue had the denied permission marks.
I have no idea why this problem occurs.

Posts: 2

Participants: 1

Btrfs Balance error


@ceh-u wrote:

Brief description of the problem

3-disk RAID5, 840GB used out of 6.46TB. Ran a balance, which went on for several hours. At completion, this error message appeared:
"Error running a command. cmd = btrfs balance start --full-balance /mnt2/CEHPool. rc = 1. stdout = ['']. stderr = ["ERROR: error during balancing '/mnt2/CEHPool': No space left on device", 'There may be more info in syslog - try dmesg | tail', '']"
The dmesg log is empty.

Detailed step by step instructions to reproduce the problem

I haven’t run balance again; I want to investigate the issue first. I can’t understand the “No space left on device” part of the message given only 12% space usage.
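For what it’s worth, btrfs reports ENOSPC during balance when it runs out of *unallocated* device space to relocate chunks into, not when file data fills up. A toy calculation (made-up figures, loosely modelled on the numbers above) illustrates the distinction:

```python
# Illustrative arithmetic only, not real btrfs accounting: a balance rewrites
# chunks into unallocated device space, so it can fail with
# "No space left on device" even while file-data usage is low.
GiB = 1
pool_size = 6460 * GiB          # ~6.46 TB pool (assumed figure)
data_used = 840 * GiB           # actual file data stored (~13% of the pool)
chunks_allocated = 6460 * GiB   # chunk space already handed out (assumed)

unallocated = pool_size - chunks_allocated   # headroom balance can use
balance_can_proceed = unallocated > 0        # False here despite low data use
data_usage_pct = round(100.0 * data_used / pool_size)
```

In this situation, `btrfs fi usage <mountpoint>` shows the real unallocated figure, and a filtered balance such as `btrfs balance start -dusage=10 <mountpoint>` is a commonly suggested first step, since compacting nearly-empty chunks frees unallocated space without needing much headroom.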


Posts: 1

Participants: 1

NordVPN on Rockstor


@ceh-u wrote:

I’m a long-time NordVPN user, and as I don’t have a router that supports VPN, I installed NordVPN in Rockstor and connected. That prevented me from accessing the NAS by the Web GUI and from connecting to the NAS for shares.

Eventually I found an Internet discussion which said whitelisting the LAN subnet allows other devices on the LAN to access it. It works perfectly so far. NordVPN has a whitelist command.

Posts: 1

Participants: 1

Installed from 3.9.1.iso but no Rockstor running


@Tony_Cristiano wrote:

I downloaded the latest iso and installed it onto my hardware. This resulted in Fedora 31 being installed and no Rockstor service. What is going on with this iso?

Posts: 4

Participants: 2

No spin-down on spare disk


@Tony_Cristiano wrote:

My non-active disk drive does not go to sleep at all with version 4.0.1. This drive is not attached to anything. It used to sleep all the time in version 3.9.1 with the same configuration.

It’s a brand new disk model WDC WD40EFAX-68JH4N0

The Rockstor dashboard shows that it’s being read from every 30 seconds.

Why is this happening?
Tony

Posts: 1

Participants: 1

Problem after initial install


Error: insert or update on table "storageadmin_networkdevice" violates foreign key constraint


@Tony_Cristiano wrote:

Brief description of the problem

I went to the “Network” on the web-UI and it showed this issue.

Detailed step by step instructions to reproduce the problem

I don’t know how it got into this state.

Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/network.py", line 201, in get_queryset
    self._refresh_devices()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/transaction.py", line 225, in __exit__
    connection.commit()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/base.py", line 173, in commit
    self._commit()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/base.py", line 142, in _commit
    return self.connection.commit()
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/utils.py", line 98, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/db/backends/base/base.py", line 142, in _commit
    return self.connection.commit()
IntegrityError: insert or update on table "storageadmin_networkdevice" violates foreign key constraint "connection_id_refs_id_1db23ec5"
DETAIL: Key (connection_id)=(2) is not present in table "storageadmin_networkconnection".

Posts: 1

Participants: 1

Website not https

Upgrading Drives


@NoirXIII wrote:

I currently have a NAS with 4 drive bays, populated with 4x4TB drives in RAID10, but I’m running out of space. I have two 12TB drives I would like to use to increase the capacity of the pool, but I’m not sure of the best method of upgrading the drive sizes without an extra slot to add the new drives to the pool before removing two of the older 4TB drives. I do have an external hard drive enclosure I can plug into the NAS via USB, but I’m not sure whether adding a drive to the pool that way before removing one of the others is a good or bad idea, as the drive name will change once it’s moved into the actual NAS.

Any help would be appreciated

System info: ROCKSTOR 3.9.2-28 & Linux: 4.12.4-1.el7.elrepo.x86_64

Posts: 5

Participants: 2

No space on device, system drive showing as 100% used when its only about 50% used


@MattWatson wrote:

Brief description of the problem

So I was trying to work out how much space I was realistically going to need for my backups (I never got to do them before this happened), and so I ran du -hs * in /mnt2. This proceeded to basically crash the system, and upon it finally coming back I’m getting error messages about the system being “out of space”.

I ran df -h to see what’s going on, because I knew I had around 50GB of free space on the system drive:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.5G     0  7.5G   0% /dev
tmpfs           7.5G     0  7.5G   0% /dev/shm
tmpfs           7.5G  761M  6.7G  10% /run
tmpfs           7.5G     0  7.5G   0% /sys/fs/cgroup
/dev/sde3       104G   53G     0 100% /
tmpfs           7.5G     0  7.5G   0% /tmp
/dev/sde3       104G   53G     0 100% /home
/dev/sde1       477M  199M  250M  45% /boot
tmpfs           1.5G     0  1.5G   0% /run/user/0

As you can see, the issue appears to be here:

/dev/sde3 104G 53G 0 100% /

Only 53GB used of 104GB, but no more space available.
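As a side note, on btrfs this df pattern (Used well under Size, yet Avail at 0) usually points at exhausted unallocated chunk space or full metadata rather than full data space. Parsing the line above makes the contradiction explicit (values hard-coded from the output shown):

```python
# Values copied from the df output above: on btrfs, Avail can hit 0 while
# Used < Size, because chunk/metadata space (not file-data space) ran out.
line = "/dev/sde3       104G   53G     0 100% /"
device, size, used, avail, use_pct, mountpoint = line.split()

size_gb = int(size.rstrip("G"))
used_gb = int(used.rstrip("G"))
avail_gb = int(avail)  # no unit suffix in the output: literally zero available

# The "contradiction": plenty of raw capacity left, yet nothing available.
headroom_gb = size_gb - used_gb
```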

This has now left me with no web-UI, and none of my pools/shares mounted.

What I am seeing is systemd-journald using a lot of CPU, and if I run journalctl -f I get this repeating at an unreadable rate:

Sep 03 03:21:53 hulk kernel: ------------[ cut here ]------------
Sep 03 03:21:53 hulk kernel: WARNING: CPU: 0 PID: 2991 at fs/btrfs/qgroup.c:2955 btrfs_qgroup_free_meta+0xde/0xf0 [btrfs]
Sep 03 03:21:53 hulk kernel: Modules linked in: ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter nct6775 hwmon_vid sunrpc dm_mirror dm_region_hash dm_log dm_mod dax ppdev eeepc_wmi asus_wmi sparse_keymap rfkill edac_mce_amd kvm_amd kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcbc ext4 jbd2 mbcache aesni_intel crypto_simd glue_helper cryptd pcspkr k10temp joydev input_leds shpchp sg sp5100_tco i2c_piix4 snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi parport_pc parport snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device tpm_infineon snd_pcm snd_timer snd soundcore video wmi acpi_cpufreq ip_tables btrfs xor raid6_pq sd_mod crc32c_intel r8169 mii ahci libahci amdkfd libata amd_iommu_v2 radeon i2c_algo_bit drm_kms_helper syscopyarea sysfillrect
Sep 03 03:21:53 hulk kernel:  sysimgblt fb_sys_fops ttm drm uas usb_storage
Sep 03 03:21:53 hulk kernel: CPU: 0 PID: 2991 Comm: in:imjournal Tainted: G        W       4.12.4-1.el7.elrepo.x86_64 #1
Sep 03 03:21:53 hulk kernel: Hardware name: PC SPECIALIST System Product Name/A88XM-PLUS, BIOS 2903 03/10/2016
Sep 03 03:21:53 hulk kernel: task: ffff880409282d80 task.stack: ffffc90002fb0000
Sep 03 03:21:53 hulk kernel: RIP: 0010:btrfs_qgroup_free_meta+0xde/0xf0 [btrfs]
Sep 03 03:21:53 hulk kernel: RSP: 0018:ffffc90002fb3be0 EFLAGS: 00010206
Sep 03 03:21:53 hulk kernel: RAX: 0000000000000102 RBX: ffff880406120000 RCX: 0000000000000000
Sep 03 03:21:53 hulk kernel: RDX: 0000000000000000 RSI: 0000000000014000 RDI: ffff88040725e000
Sep 03 03:21:53 hulk kernel: RBP: ffffc90002fb3c08 R08: 0000000000000000 R09: 0000000000001400
Sep 03 03:21:53 hulk kernel: R10: 00000000ffffffe4 R11: 00000000000002a9 R12: ffff88040725e000
Sep 03 03:21:53 hulk kernel: R13: ffff880406120000 R14: 0000000000014000 R15: 0000000000014000
Sep 03 03:21:53 hulk kernel: FS:  00007f0885419700(0000) GS:ffff88041ec00000(0000) knlGS:0000000000000000
Sep 03 03:21:53 hulk kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Sep 03 03:21:53 hulk kernel: CR2: 00007f85a5314000 CR3: 00000003f5dd0000 CR4: 00000000000406f0
Sep 03 03:21:53 hulk kernel: Call Trace:
Sep 03 03:21:53 hulk kernel:  start_transaction+0x378/0x440 [btrfs]
Sep 03 03:21:53 hulk kernel:  btrfs_start_transaction+0x1e/0x20 [btrfs]
Sep 03 03:21:53 hulk kernel:  btrfs_create+0x5a/0x220 [btrfs]
Sep 03 03:21:53 hulk kernel:  path_openat+0xf1f/0x13b0
Sep 03 03:21:53 hulk kernel:  ? unix_dgram_sendmsg+0x2b1/0x690
Sep 03 03:21:53 hulk kernel:  do_filp_open+0x91/0x100
Sep 03 03:21:53 hulk kernel:  ? __alloc_fd+0x46/0x170
Sep 03 03:21:53 hulk kernel:  do_sys_open+0x124/0x210
Sep 03 03:21:53 hulk kernel:  SyS_open+0x1e/0x20
Sep 03 03:21:53 hulk kernel:  entry_SYSCALL_64_fastpath+0x1a/0xa5
Sep 03 03:21:53 hulk kernel: RIP: 0033:0x7f0888cb277d
Sep 03 03:21:53 hulk kernel: RSP: 002b:00007f0885417c30 EFLAGS: 00000293 ORIG_RAX: 0000000000000002
Sep 03 03:21:53 hulk kernel: RAX: ffffffffffffffda RBX: 00007f0878000020 RCX: 00007f0888cb277d
Sep 03 03:21:53 hulk kernel: RDX: 00000000000001b6 RSI: 0000000000000241 RDI: 00007f0885417cd0
Sep 03 03:21:53 hulk kernel: RBP: 00007f0878000020 R08: 00007f0886d8ed8c R09: 0000000000000240
Sep 03 03:21:53 hulk kernel: R10: 0000000000000024 R11: 0000000000000293 R12: 00007f0885418db0
Sep 03 03:21:53 hulk kernel: R13: 00007f0875aa174b R14: 0000000000000020 R15: 000000000000000a
Sep 03 03:21:53 hulk kernel: Code: 48 8b 03 4d 89 f7 49 f7 df 48 8b 7b 08 48 83 c3 18 4c 89 fa 4c 89 e6 ff d0 48 8b 03 48 85 c0 75 e8 49 8b 84 24 38 03 00 00 eb 98 <0f> ff eb 86 0f 0b 66 90 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44
Sep 03 03:21:53 hulk kernel: ---[ end trace 905ef2e4814bc84a ]---

Any help will be greatly appreciated.

Thanks in advance
Matt

Posts: 1

Participants: 1

Found a failed Task (2d854592-df69-4cd9-b1c7-69b25dadf520) in the future of a pending Task


@chrisdingle wrote:


Brief description of the problem

Unable to view rockons page in the UI. Receive an error message.

Detailed step by step instructions to reproduce the problem

Have been working through a number of problems overnight. Started with duplicati being stuck in a task loop. Resolved through deleting the rockon. Have since also deleted transmission - open vpn. Now persistently receive this error after several reboots.


Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py", line 68, in run_for_one
    self.accept(listener)
  File "/opt/rockstor/eggs/gunicorn-19.7.1-py2.7.egg/gunicorn/workers/sync.py", line 27, in accept
    client, addr = listener.accept()
  File "/usr/lib64/python2.7/socket.py", line 202, in accept
    sock, addr = self._sock.accept()
error: [Errno 11] Resource temporarily unavailable

Posts: 1

Participants: 1

Snapshot delete error


@oy_delovoy wrote:

Brief description of the problem

I cannot delete the snapshot



Error Traceback provided on the Web-UI

Traceback (most recent call last):
  File "/opt/rockstor/src/rockstor/rest_framework_custom/generic_view.py", line 41, in _handle_exception
    yield
  File "/opt/rockstor/src/rockstor/storageadmin/views/snapshot.py", line 223, in delete
    self._delete_snapshot(request, sname, snap_name=snap_name)
  File "/opt/rockstor/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor/src/rockstor/storageadmin/views/snapshot.py", line 208, in _delete_snapshot
    remove_snap(share.pool, sname, snapshot.name)
  File "/opt/rockstor/src/rockstor/fs/btrfs.py", line 589, in remove_snap
    log=True)
  File "/opt/rockstor/src/rockstor/system/osi.py", line 115, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /sbin/btrfs subvolume delete /mnt2/Data/test1. rc = 1. stdout = ["Delete subvolume (no-commit): '/mnt2/Data/test1'", '']. stderr = ["ERROR: cannot delete '/mnt2/Data/test1': Directory not empty", '']
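The “Directory not empty” error from btrfs subvolume delete typically means the snapshot still contains nested subvolumes, which have to be deleted before their parent (deepest first). A small sketch of that ordering, using a made-up subvolume layout (the nested path is hypothetical):

```python
# Hypothetical layout: /mnt2/Data/test1 still contains a nested subvolume,
# which is why "btrfs subvolume delete /mnt2/Data/test1" reports
# "Directory not empty". Children must be deleted before their parent.
layout = {
    "Data/test1": ["Data/test1/nested"],
    "Data/test1/nested": [],
}

def delete_order(subvol, layout):
    """Return subvolumes in a safe deletion order: children first (depth-first)."""
    order = []
    for child in layout.get(subvol, []):
        order.extend(delete_order(child, layout))
    order.append(subvol)
    return order

order = delete_order("Data/test1", layout)
```

Running `btrfs subvolume list <mountpoint>` should reveal any such nested subvolumes under the snapshot path.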

Posts: 1

Participants: 1

Problems shares


@coolbit wrote:

The Shares page is presenting the message "Share size enforcement is temporarily disabled due to incomplete support in BTRFS. Until this status changes, the effective size of a Share is equal to the size of the Pool it belongs to." How can I solve this problem? I’ve done a lot of research but I couldn’t find the right solution.

Posts: 2

Participants: 2


Error while attempting to create SMB export

$
0
0

@legion411 wrote:

Brief description of the problem

This error appears when attempting to create samba export: “Unknown internal error doing a GET to /api/users?page=1&format=json&page_size=9000&count=”

Detailed step by step instructions to reproduce the problem

Click ‘Storage’ -> ‘File Sharing’ -> ‘Samba’ -> ‘Add Samba Export’


Error Traceback provided on the Web-UI

NONE

Also, when clicking the link to automatically create a support ticket, it fails and just opens a new window with lots of text (API fault?).

Posts: 8

Participants: 2

Top Shares by Usage 3.9.1-0 - displays 0 bytes but also shows usage


@Absenth wrote:

I see a few topics from 2015 and 2016 that are similar to what I’m seeing.

I imported disks when I had to reinstall Rockstor, and now the “top shares by usage” dashboard widget isn’t working properly. If you look, the two “backup” folders both have quite a bit of data in them, but the widget displays 0 bytes.

I’m ok, with just turning that widget off, but I wanted to also make sure I shared what I found in case this wasn’t a known issue.


Posts: 1

Participants: 1

Issue changing from raid1 to raid0


@Jonas_Vinge wrote:

Hi, I get an error message when I try to change a pool from RAID1 to RAID0.

Error running a command. cmd = btrfs balance start -mconvert=raid0 -dconvert=raid0 /mnt2/Cloudpool. rc = 1. stdout = ['']. stderr = ["ERROR: error during balancing '/mnt2/Cloudpool': Invalid argument", 'There may be more info in syslog - try dmesg | tail', '']

I’m a noob and I was actually trying to change the hard drive I have Nextcloud on, because I see that I might need more space, so I messed around with stuff I don’t know enough about. I tried to include the new drive in the pool via RAID0 and delete the original drive from the pool, but got tons of errors. Then I decided to get them into RAID1 before googling whether that would work… Now I can’t get them back to RAID0.

Please help. Tips about how to move all my Nextcloud stuff to a new drive properly would also be nice.

Posts: 2

Participants: 2

Pool full - system unstable - pools no longer mount


@erisler wrote:

Hello, recently my data pool (named “pool1”, NOT the system pool) has filled up. The system was unresponsive so I power cycled the pc. Rebooting reveals that the pools don’t mount. The Bootstrap service is not running and will not start.

I have read various threads on this symptom but none have worked for me. After a reboot, if I attempt to start the bootstrap service (systemctl start rockstor-bootstrap.service) my pool1 mounts. I have since removed some files and rebooted but still no luck getting the service to start properly.

[root@ymtrockstor ~]# journalctl -n 50
-- Logs begin at Mon 2020-09-14 18:30:12 EDT, end at Mon 2020-09-14 19:33:01 EDT. --
Sep 14 18:32:03 ymtrockstor bootstrap[2658]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: Exception while setting access_token for url(h
Sep 14 18:32:03 ymtrockstor bootstrap[2658]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: Exception while setting access_token for url(h
Sep 14 18:32:03 ymtrockstor bootstrap[2658]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: Exception while setting access_token for url(h
Sep 14 18:32:03 ymtrockstor bootstrap[2658]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: Exception while setting access_token for url(h
Sep 14 18:32:07 ymtrockstor bootstrap[2658]: Exception occured while bootstrapping. This could be because rockstor.service is still starting up. will wait 2 seconds and try again. Exception: Exception while setting access_token for url(h
Sep 14 18:32:07 ymtrockstor bootstrap[2658]: Max attempts(15) reached. Connection errors persist. Failed to bootstrap. Error: Exception while setting access_token for url(http://127.0.0.1:443): No JSON object could be decoded. content: N
Sep 14 18:32:07 ymtrockstor systemd[1]: rockstor-bootstrap.service: main process exited, code=exited, status=1/FAILURE
Sep 14 18:32:07 ymtrockstor systemd[1]: Failed to start Rockstor bootstrapping tasks.
Sep 14 18:32:07 ymtrockstor systemd[1]: Unit rockstor-bootstrap.service entered failed state.
Sep 14 18:32:07 ymtrockstor systemd[1]: rockstor-bootstrap.service failed.
Sep 14 18:32:07 ymtrockstor systemd[1]: Starting Samba SMB Daemon...
Sep 14 18:32:07 ymtrockstor systemd[1]: smb.service: Supervising process 3412 which is not our child. We'll most likely not notice when it exits.
Sep 14 18:32:07 ymtrockstor smbd[3412]: [2020/09/14 18:32:07.635188,  0] ../lib/util/become_daemon.c:138(daemon_ready)
Sep 14 18:32:07 ymtrockstor smbd[3412]:   daemon_ready: STATUS=daemon 'smbd' finished starting up and ready to serve connections
Sep 14 18:32:07 ymtrockstor systemd[1]: Started Samba SMB Daemon.
Sep 14 18:32:07 ymtrockstor systemd[1]: Reached target Multi-User System.
Sep 14 18:32:07 ymtrockstor systemd[1]: Reached target Graphical Interface.
Sep 14 18:32:07 ymtrockstor systemd[1]: Starting Update UTMP about System Runlevel Changes...
Sep 14 18:32:07 ymtrockstor systemd[1]: Started Stop Read-Ahead Data Collection 10s After Completed Startup.
Sep 14 18:32:07 ymtrockstor systemd[1]: Started Update UTMP about System Runlevel Changes.
Sep 14 18:32:07 ymtrockstor systemd[1]: Startup finished in 1.900s (kernel) + 2.583s (initrd) + 1min 52.555s (userspace) = 1min 57.039s.
Sep 14 18:32:37 ymtrockstor systemd[1]: Starting Stop Read-Ahead Data Collection...
Sep 14 18:32:37 ymtrockstor systemd[1]: Started Stop Read-Ahead Data Collection.
Sep 14 18:32:52 ymtrockstor chronyd[2166]: Selected source 206.108.0.133
Sep 14 18:36:58 ymtrockstor dbus[2161]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service'
Sep 14 18:36:58 ymtrockstor systemd[1]: Starting Hostname Service...
Sep 14 18:36:58 ymtrockstor dbus[2161]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 14 18:36:58 ymtrockstor systemd[1]: Started Hostname Service.
Sep 14 18:45:10 ymtrockstor systemd[1]: Starting Cleanup of Temporary Directories...
Sep 14 18:45:10 ymtrockstor systemd[1]: Started Cleanup of Temporary Directories.
Sep 14 18:48:39 ymtrockstor kernel: BTRFS info (device sdc): disk space caching is enabled
Sep 14 18:48:39 ymtrockstor kernel: BTRFS info (device sdc): has skinny extents
Sep 14 18:48:39 ymtrockstor kernel: BTRFS info (device sdc): bdev /dev/sdd errs: wr 0, rd 11, flush 0, corrupt 0, gen 0
Sep 14 19:01:01 ymtrockstor systemd[1]: Created slice User Slice of root.
Sep 14 19:01:01 ymtrockstor systemd[1]: Started Session 1 of user root.
Sep 14 19:01:01 ymtrockstor CROND[7232]: (root) CMD (run-parts /etc/cron.hourly)
Sep 14 19:01:01 ymtrockstor run-parts(/etc/cron.hourly)[7235]: starting 0anacron
Sep 14 19:01:01 ymtrockstor run-parts(/etc/cron.hourly)[7241]: finished 0anacron
Sep 14 19:01:01 ymtrockstor run-parts(/etc/cron.hourly)[7243]: starting 0yum-hourly.cron
Sep 14 19:09:23 ymtrockstor run-parts(/etc/cron.hourly)[7298]: finished 0yum-hourly.cron
Sep 14 19:09:23 ymtrockstor systemd[1]: Removed slice User Slice of root.
Sep 14 19:31:39 ymtrockstor dbus[2161]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service'
Sep 14 19:31:39 ymtrockstor systemd[1]: Starting Hostname Service...
Sep 14 19:31:39 ymtrockstor dbus[2161]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 14 19:31:39 ymtrockstor systemd[1]: Started Hostname Service.
Sep 14 19:33:01 ymtrockstor sshd[9451]: Accepted password for root from 10.211.211.118 port 24125 ssh2
Sep 14 19:33:01 ymtrockstor systemd[1]: Created slice User Slice of root.
Sep 14 19:33:01 ymtrockstor systemd-logind[2175]: New session 2 of user root.
Sep 14 19:33:01 ymtrockstor systemd[1]: Started Session 2 of user root.
Sep 14 19:33:01 ymtrockstor sshd[9451]: pam_unix(sshd:session): session opened for user root by (uid=0)

[root@ymtrockstor ~]# systemctl start rockstor-bootstrap.service
Job for rockstor-bootstrap.service failed because the control process exited with error code. See "systemctl status rockstor-bootstrap.service" and "journalctl -xe" for details.


Posts: 1

Participants: 1

Opencloud not working


@tyson wrote:

Hi all,
I recently tried to install Owncloud on my NAS, and whenever I go to open the UI it keeps coming up with the error shown in the attached image.
It is running on a fresh install of Rockstor, and the only other add-on I have is the HTTP to HTTPS redirect. I have the Owncloud port set to 8080. Is it something that I have done, or is it looking for something that I have not got installed/enabled on my NAS?

Thanks for your help.

Posts: 1

Participants: 1
