Hi people :)
I have been using Proxmox for quite some time now and am looking to change my hardware (again) ;) As my coming setup will probably allow me to use 2 PCIe NVMe SSDs and n SATA3 SSDs, I am wondering whether, specifically in Proxmox, I should prioritize the faster NVMe drives for the Proxmox root installation or for guest storage.
Of course my ultimate goal is service performance, so the question is probably whether Proxmox really does as much IO on its root drives as is often stated, or whether "payload data" storage is more important.
Any opinions/experiences are highly appreciated :)
I recently installed Proxmox after using ESXi. Got everything up and running; however, I don't see how to add a virtual disk to available storage. Does anyone have any tips, or resources that might help me add a drive? Thanks in advance!
Hi, I have an old Dell MD3200i storage (no longer supported).
One disk failed, and I am looking for replacement online.
Failed disk details:
Dell
P/N: 9PN066-150
MODEL NUMBER: ST9600204SS
Image is here:
I found this:
https://www.amazon.it/Seagate-Technology-ST9600204SS-Savvio-HardDisk/dp/B0047O25FI
but between comments I read:
>Drive is not certified for use in a Dell MD series SAN. It results in a "Drive Incompatible" error and Dell states it is due to substandard firmware, which is not updatable.
Is it actually not possible to flash the disk firmware?
If not, should I just go for an original Dell disk, like this one?
https://www.ebay.it/itm/175066101745
Thank you :)
I could get a lot of old 8GB DDR3 memory modules from work for free. They would be ideal for use as a ram disk to do transcoding and the like.
The mainboard in my server only supports DDR4 memory. I know I could just buy more of it but considering the current prices I am not willing to spend any money on my server.
So does something like a PCI-e to DDR3 RAM adapter card exist?
I found some for DDR1 memory.
Hi all
This has been happening to me a lot in vCenter 7.0.3: a hot vMotion migration fails when the VM has a large disk; this one is 3TB.
I have 10TB free on both my source and destination datastores, though, so I don't understand why it fails.
Anyone else seeing this?
This is a short one but great for Linux users to read.
TL;DR: Was downloading Arch Linux, fucked up the USB stick, and while fixing that I accidentally deleted the contents of my main storage disk.
So, I am new to Linux in general, and I had the genius idea of putting Arch Linux (google it, pretty much one of the most advanced Linux distributions out there) on a USB stick. While writing it to the stick I had to partition it, which means the computer detects it as two different drives later on. After seeing that I had screwed up the install, since my computer wouldn't boot from it (I had changed the BIOS to try booting from USB first), I went to format the stick, which was harder than I thought.
Now here comes the fun part: as I went into diskpart in cmd, I didn't pay attention to which volume I had selected when I deleted it. It was my main storage disk. Thing is, formatting a 1TB HDD isn't that fast of a process. It is 5:30 AM here and I have to wait at least an hour for it to format.
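For anyone reading later: diskpart will happily wipe whatever is currently selected, so the safe habit is to list everything and re-check the selection with `detail` before any destructive command. A sketch of the sequence (the disk number here is illustrative; the sizes in `list disk` make a 1TB data disk easy to tell apart from a small stick):

```
DISKPART> list disk           REM sizes shown here make the data disk obvious
DISKPART> select disk 2       REM the USB stick (illustrative number)
DISKPART> detail disk         REM double-check: name, size, "Removable Media"
DISKPART> clean               REM destructive: only after the check above
DISKPART> create partition primary
DISKPART> format fs=fat32 quick
```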
Hi all,
I recently built a small home server and it has 3 drives (1 SSD, 2x 4TB HDD). Proxmox VE is installed on the SSD and I have it up and running with one LXC so far running pi-hole.
I want to turn the 2 HDDs into shared storage I can access from my Windows desktop on my network. These will be purely for content (storing movies, pictures, files). I also want it to be accessible from the VM or LXC I create for my Plex server and torrent/seed server.
What is the ideal configuration for this? I did some research and I see people recommending TrueNAS in a VM or installing something like Samba directly in Proxmox. ZFS seems like it is not ideal for me because I want a total of 8TB capacity and ZFS RAID0 is not recommended.
Thanks in advance
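For what it's worth, the "Samba directly in Proxmox" route people mention usually amounts to installing the samba package on the host (or in an LXC with the disks bind-mounted in) and exporting the mount point. A minimal sketch, assuming the HDDs are mounted at /mnt/media, which is a hypothetical path, and "youruser" is a placeholder account:

```ini
; /etc/samba/smb.conf -- minimal share sketch (assumed mount point /mnt/media)
[media]
   path = /mnt/media
   browseable = yes
   read only = no
   valid users = youruser   ; hypothetical user, added with: smbpasswd -a youruser
```

After editing the config, restarting the smbd service makes the share visible from the Windows desktop as \\hostname\media.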
If I use a second SSD for just metadata, how much space do I need? Also thinking of using a 64GB USB 3.1 stick for metadata. Thoughts?
Hello everyone. Sorry if noob question, I'm still learning VMware products. We have vCenter and several vSphere hosts with simple iSCSI and host datastores without vSAN. At vCenter, there are some default VM storage policies. When I'm creating a new VM I choose "Management storage policy - thin" but every datastore in the list has a warning "Datastore does not match current VM policy". I tried to create a new VM storage policy but can't find anything about thin provisioning in Host-based services. It is only in Datastore specific rules - vSAN storage rules. Is there a way to create a VM storage policy with thin provisioning usable for simple datastores without vSAN?
All of my disks show up, and they show in the VMs as well. However, I would like to install VMs on a different drive, and that drive doesn't show up in Proxmox. Does anyone know how I can add drives in storage so I can install VMs to different drives?
I'm super new to VMs, so sorry about what might be a dumb question, but everything I've googled on virtual storage has confused me even more, and I'm unclear on whether that's a thing.
I think I have made the decision to move to unraid for my new build.
At the moment I have 6 x 8TB disks in my server; I also have a handful of smaller disks (4 x 2TB, 4 x 4TB).
I want to migrate from my existing setup to something running Unraid, but ideally want to get the data off the bigger drives before moving them into the Unraid system.
If I was to set up the new system with 6 x 4TB and 2 x 2TB, this "should" give me the capacity to move everything over. Would I then be able to replace some of the disks with the 8TB disks from the old setup and slowly increase my available storage?
Second question: is it A) possible and B) advisable to migrate VMs from ESXi 7 to a system running Unraid? Or would it make more sense to start from scratch and then move the data over from one running VM to another?
TIA
Hi,
I have a QNAP TS-251 device with two 4TB drives in RAID 1.
Disk 2 crashed a week ago, and I purchased two 6TB drives as replacements.
So I inserted the first one in place of the one that crashed. After the rebuild finished, I replaced the other 4TB drive with the new 6TB drive.
Now my problem is that I can't find a way to expand the volume from 4TB to 6TB. I found these instructions, but they don't cover my case because one of the drives in my RAID was missing:
How can I expand the Storage Pool/Static Volume by replacing the disks with larger capacity drives?(QTS 4.3.4/4.3.5) | QNAP
Any idea?
I attached a screenshot of the current situation. The RAID group has 3.63 TB capacity, while each disk within the RAID has 5.46 TB capacity.
https://preview.redd.it/nmax63gi6d381.jpg?width=907&format=pjpg&auto=webp&s=b5222904eba4ca28d8ff7f6a262b51a02cdc0d1d
But if I press Expand I get the strange screen below, which leads nowhere.
https://preview.redd.it/geotbgey6d381.jpg?width=897&format=pjpg&auto=webp&s=1d5ef2b5ef6eb42795a1947a2c9faeb27910f98b
What's the best way to compactly store a large quantity of optical discs? I have many CDs, DVDs, BDs, video games and miscellaneous discs. Most are in their original cases, but I'd like to toss the cases and store them at a higher density. I don't like using those spindles that CD-Rs come on, because one rogue speck of grit between two discs can lead to disaster. I don't want to use disc wallets either: I had one full of discs and was horrified to discover that chemicals from the pocket material had leached out and etched a hazy honeycomb pattern onto some of the discs after ~5 years of storage, though I was able to polish it off. There's got to be a better way.
I have been working on a simple in-memory key-value database, written in Go, for learning purposes. Now I want to understand how data persistence works and how I can implement it from scratch. I've read a bit about B-Trees, but I'm not sure how they can be used for persistence. I am looking for resources suitable for a novice like me; they don't have to be about Go. Also, since I am doing this for learning purposes, please don't suggest using any libraries.
Thanks :)
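Since the question is how persistence can be implemented from scratch: before reaching for B-Trees, the simplest scheme is an append-only log that is replayed at startup (the basic idea behind write-ahead logs and Bitcask-style stores). A minimal Python sketch of that idea, since the post says it doesn't have to be in Go; the file name `kv.log` and the JSON record format are made up for illustration:

```python
# Append-only-log persistence sketch: every write is appended to a log file,
# and on startup the log is replayed to rebuild the in-memory dict.
import json
import os


class KVStore:
    def __init__(self, path="kv.log"):
        self.path = path
        self.data = {}
        # Replay the log, if one exists, to rebuild the in-memory state.
        if os.path.exists(path):
            with open(path) as f:
                for line in f:
                    op = json.loads(line)
                    if op["op"] == "set":
                        self.data[op["key"]] = op["value"]
                    else:  # "del"
                        self.data.pop(op["key"], None)

    def set(self, key, value):
        self._append({"op": "set", "key": key, "value": value})
        self.data[key] = value

    def delete(self, key):
        self._append({"op": "del", "key": key})
        self.data.pop(key, None)

    def get(self, key):
        return self.data.get(key)

    def _append(self, op):
        # Durability: flush and fsync so the record survives a crash.
        with open(self.path, "a") as f:
            f.write(json.dumps(op) + "\n")
            f.flush()
            os.fsync(f.fileno())
```

Reopening the same path simulates a restart: the replayed state matches what was written. The obvious next steps from here are log compaction (rewriting the log with only live keys) and an on-disk index, which is roughly where B-Trees and LSM-trees come in.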
How can I change the storage location of witnet? I'm running it in Docker.
I changed the
db_path = ".witnet/storage"
to
db_path = "D: .witnet/storage"
But it's still saving all data to the first path. Please help.
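In case it helps others hitting the same symptom: inside a container, `db_path` must be a path as seen from inside the container, and a Windows path like `D:` is only reachable if it is mapped in as a volume. A sketch of the usual pattern (the service name, image tag, and paths here are illustrative, not taken from the witnet docs):

```yaml
# docker-compose.yml sketch: map a host folder (e.g. D:\witnet-data) to /data
# inside the container, then point db_path at the container-side path.
services:
  witnet:
    image: witnet/witnet-rust   # illustrative image name
    volumes:
      - "D:/witnet-data:/data"
    # the config inside the container would then use: db_path = "/data/storage"
```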
I've been playing with my Storage Spaces setup and tinkering with settings; I've read lots of articles about interleave, number of columns, redundancy disks, etc. I upgraded from Server 2019 to 2022 since I read it had better Storage Spaces parity support, but I haven't seen any difference in my tests.
I've got 10 HDDs that are 6TB each, and two 512GB SSDs.
I've enabled failover clustering so I can get the StorageBusCache option for the two SSDs, because my write speeds max out around 70MBps with a parity setup. One HDD by itself (all tested individually) gets around 180-220MBps. I've read that parity on Storage Spaces is terrible, but I have seen some articles about using the SSDs as a cache for write speeds, and I cannot seem to figure out how to make that happen.
At one point I used only four of my 12 HDDs and an older SSD I had laying around and got write speeds of 220MBps, which is the exact write speed of that older SSD, so I bought a few better ones that test at 530MBps. But no matter what PowerShell config I use to create the vdisk or storage pool, I'm still getting terrible write speeds. I'd prefer a RAID 6 type setup with 10 or 12 disks for safety, but even a RAID 50 type setup would be OK for the purpose of these drives.
Anyone here use Storage Spaces and figure out the magic commands for the parity nonsense that MS has put together?
Obviously, when I put all 12 HDDs alone in a simple vdisk I get almost 2000MBps read and write... with mirror I get about 1.2GBps. That's without the SSDs in the mix at all. What is the best solution in a Windows environment to get parity storage with at least single-drive write speeds?
Also, I cannot seem to get BOTH SSDs into the same storage pool. Only one SSD shows when scanning for storage, and if I use one, the other SSD becomes available to use elsewhere but fails whenever I try to create a storage pool with it. Is that a limit of the StorageBusCache parameters? Can only one SSD ever be used? Example: I tried creating two pools of 5 HDDs and 1 SSD each; the first pool was successful, and the second failed with the error "one or more physical disks encountered an error while creating the storage pool", which after trial and error turned out to be the SSD. The drive works fine by itself; I've reset it, cleaned it with diskpart, etc. I just cannot ever use them both, no matter what I try.
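For reference, StorageBusCache belongs to the Storage Spaces Direct style of setup; on a standalone pool, the approach people usually describe for better parity write speeds is a tiered virtual disk, with the SSDs as a mirrored performance tier in front of an HDD parity tier, built with the storage cmdlets. A rough PowerShell sketch, not a tested recipe: the pool, tier names, and sizes are made up, and column counts depend on how many disks are actually in the pool:

```powershell
# Sketch: pool all poolable disks, then build a tiered volume
# (SSD mirror tier + HDD parity tier). All names/sizes are illustrative.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" `
    -MediaType SSD -ResiliencySettingName Mirror
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" `
    -MediaType HDD -ResiliencySettingName Parity -NumberOfColumns 10

New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "Data" `
    -FileSystem ReFS -StorageTiers $ssd, $hdd `
    -StorageTierSizes 400GB, 40TB
```

With tiering, writes land on the SSD tier first and are destaged to the parity tier later, which is why it tends to mask the slow parity write path better than a plain parity vdisk.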
See title. This one's probably really braindead, but I'm curious.