I started getting this warning today, but I switched to using direct drive space a while back, so I'm not even using the image file. What's going on?
Here are my containers:
docker system df -v
Images space usage:
REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS
linuxserver/plex latest 73df5fc06491 41 hours ago 625MB 0B 625MB 1
lscr.io/linuxserver/tautulli latest fd99acec92ae 2 days ago 120.3MB 0B 120.3MB 1
binhex/arch-qbittorrentvpn latest f439be84b672 5 days ago 1.028GB 0B 1.028GB 1
pihole/pihole latest fab3debf57da 6 days ago 301.3MB 0B 301.3MB 1
lscr.io/linuxserver/overseerr latest a856b9c6ca9a 7 days ago 498.6MB 24.8MB 473.8MB 1
hotio/readarr nightly 5f9c45cf666a 8 days ago 301.6MB 5.581MB 296MB 1
registry.hub.docker.com/library/postgres 14 07e2ee723e2d 8 days ago 374MB 0B 374MB 1
jlesage/nginx-proxy-manager latest e1d993c0c144 9 days ago 185.6MB 5.581MB 180MB 1
binhex/arch-prowlarr latest ffc3be2922c9 13 days ago 864.9MB 0B 864.9MB 1
lscr.io/linuxserver/nextcloud latest 649370fb872a 2 weeks ago 438.8MB 24.8MB 414MB 1
factoriotools/factorio stable 53e1a8c2f86c 2 weeks ago 292.2MB 0B 292.2MB 1
lscr.io/linuxserver/scrutiny latest 567b73b96cd0 2 weeks ago 90.77MB 0B 90.77MB 0
lscr.io/linuxserver/ddclient latest 14b44ffbe4af 3 weeks ago 77.96MB 0B 77.96MB 1
ich777/steamcmd satisfactory ccf8f6ac4bdd 4 weeks ago 131.5MB 0B 131.5MB 1
hkotel/mealie latest 5308ab71d5a2 5 weeks ago 385.1MB 0B 385.1MB 1
binhex/arch-sonarr latest 02889fd85f9d 3 months ago 979.8MB 0B 979.8MB 1
bin
Hi all! Sorry if this is something super basic, but I'm very new to all of this. I just got an Apple IIc Plus and I've been archiving some old disks I found over ADTPro. These are probably things like text files etc., and I wanted to know if there was any way to separate the files from the images so I can archive the files separately.
The disks I'm transferring are 3.5" DD disks & they are being made into .po files
Also - second question, but is there a good utility for turning disk image files into other disk image file formats? A lot of the disk images I'm finding online are not formats that ADTPro can use.
I've seen a few posts on here where, if you were skim reading, you'd copy and paste a command that would wipe your primary drive... eeek.
So, I'm posting this in the hope that it drowns out some of the dangerous advice.
dd is great for writing usb disk images. Use this:
dd if=image.img of=/dev/null status=progress
where you replace /dev/null with the name of your usb drive (if you need to find it, the df command is a quick way).
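A minimal sketch of that workflow, assuming GNU coreutils; sdX is deliberately a placeholder here and should only be filled in after you've confirmed the device:
lsblk -o NAME,SIZE,MODEL,TRAN   # list block devices with size, model and transport so the USB stick is easy to spot
sudo dd if=image.img of=/dev/sdX bs=4M status=progress conv=fsync   # write to the whole device, flushing caches at the end
Writing to the whole device (sdX) rather than a partition (sdX1) is what most bootable images expect.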
I set up my Raspberry Pi the way I want it, and I want to make an image of the SD card so I can easily flash it again if I want to. The problem is that while only a few GB are being used, the SD card is 32GB, so if I just use dd I'll get a 32GB img.
I know I could just make a tar of the whole file system, but then I'd have to partition it, mkfs, and untar, which is more work than just flashing. Is there any way to make an img that's just the size of what's being used?
EDIT: clonezilla seems to be the best way, thanks!
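If you'd rather stay with dd, a rough sketch (mmcblk0 and the sector number are placeholders, and this assumes the used partitions sit at the start of the card): image the whole card, check where the last partition ends, then cut the file just past that point.
sudo dd if=/dev/mmcblk0 of=pi-backup.img bs=4M status=progress
fdisk -l pi-backup.img   # note the "End" sector of the last partition
truncate --size=$(( (END + 1) * 512 )) pi-backup.img   # replace END with the sector number you noted
This only trims unpartitioned space after the last partition; shrinking the ext4 filesystem itself first (resize2fs) makes the image smaller still, which is roughly what tools like PiShrink automate.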
I am receiving a SCSI2SD adapter for Christmas, and I wish to use it to boot Linux on my Macintosh SE. I've read the documentation for Penguin booter and EMILE, but I was wondering if you knew of/had any premade disk images I can flash to the card and boot.
I'm not sure what is going on, but this is the second week in a row that I received this warning. I can't figure out what is running at 2AM either. Any ideas on how to troubleshoot this? My docker size is 20GB.
https://preview.redd.it/wwhq08odpb781.png?width=360&format=png&auto=webp&s=ae213be3f6f902c37873e172ceb58233e0e72a9b
I have been having issues with NFS and slow read/write speeds in Proxmox from my TrueNAS system. It is only on the disk image NFS share that I am getting these slow speeds.
If I switch the pool over to SMB, will I have the same issue? I want to test it, but things otherwise work just fine; it's only when starting up a VM that I get slow startup and such.
I wanted to have a safe folder in my computer protected with a password so I used Disk Utility to do so. It created a dmg file and it was working fine.
Today, I wanted to open the folder, but it was an alias instead of a dmg. And I can't open it because "the original item can't be found".
Is there something I can do?
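For anyone hitting the same thing: the alias only points at the original .dmg, so it's worth searching the disk for it before assuming it's gone; and if it really is gone, an encrypted image can also be recreated from the terminal. A small sketch, with the size and volume name as placeholders:
find ~ -name "*.dmg" 2>/dev/null   # look for the original image anywhere in your home folder
hdiutil create -size 500m -fs APFS -encryption AES-256 -volname Safe safe.dmg   # recreate a password-protected image; prompts for a new password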
Like many of you I followed the Retro Game Corps tutorial to set up my RG280V with the Adam image. But there are two questions I can't seem to find answers to, both related to multi-disc PSX games:
What am I missing? Thanks for your help community.
So, I want to be able to create images of external hard drives in my Linux machine.
For example, I take a hard drive, put it in a USB case, plug it to my Linux computer, and then image that drive to a file (that can be saved in a local partition, or in another USB device).
I know about Clonezilla and other solutions, but I want to do this operation from inside my Linux session. I shouldn't have to boot into a live environment, because I just want to create an image of a foreign disk (i.e. I don't want a backup of my actual Linux installation). The typical solutions that I found almost always involve a live CD or a live environment, which is perfect if I want to back up or restore my entire computer, but not practical if I just want to back up a USB disk.
In Windows I've used Macrium Reflect, which allows me to quickly create an image of a USB disk without leaving Windows at all.
Any pointers or suggestions? Thanks!
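Assuming GNU coreutils are available, a plain dd run from the normal session works fine for a foreign USB disk; a sketch where sdX is the whole device as shown by lsblk and the output paths are placeholders:
lsblk -o NAME,SIZE,MODEL,TRAN   # the TRAN column shows which devices are attached over usb
sudo dd if=/dev/sdX of=/data/backups/usb-disk.img bs=4M status=progress
sudo dd if=/dev/sdX bs=4M status=progress | gzip -c > /data/backups/usb-disk.img.gz   # compressed variant for mostly-empty disks
GNOME Disks also has a point-and-click "Create Disk Image" option if a GUI inside the session is preferred.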
So I recently acquired a fairly unique Dolch 63c system with all its network sniffing software. I made a disk image of it, and am wondering where to upload it for people who may want to access it.
Good evening Reddit,
We have a pretty big application deployed on GCP. Given its increasing complexity we have decided to start (learning about and) using Terraform.
Currently, we have multiple MIGs for services that need to be scalable (k8s would probably be better for that long term?), some ML models on Vertex, and everything is duplicated x2 to have prod and staging environments. That and load balancers, VPCs...
So we're very happy to have discovered Terraform; it looks like it'll help limit the number of clicks in the GCP interface, which I'm quite literally having nightmares about.
Getting to the point: we use MIGs to deploy some extra-large ML models using instances that boot from custom images. These images are usually Ubuntu server + tons of custom stuff (CUDA, libs, monitoring...).
Is there a way to automate the creation of disk images? For now we run scripts, but we are hoping for a stateful way to define what we want, e.g. Ubuntu 16.04, conda, CUDA, ...
Can Terraform be used for that? Or can any other tool?
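Not a full answer, but the image-baking step itself can at least be scripted with the gcloud CLI once a builder VM has been provisioned with the stack; a rough sketch, where ml-builder, the zone and the image names are placeholders and gcloud is assumed to be already authenticated (the boot disk normally shares the instance's name):
gcloud compute instances stop ml-builder --zone=europe-west1-b   # stop the builder so its boot disk is consistent
gcloud compute images create ml-base-v2 --source-disk=ml-builder --source-disk-zone=europe-west1-b --family=ml-base
Grouping images into a family makes it easy to pick up the newest image when creating or updating the instance template; HashiCorp's Packer is the usual declarative companion to Terraform for defining what goes into the image in the first place.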
I do IT for a small company and we have 6 iMacs that my supervisor would like an image made of. Any suggestions on software? I've tried Disk Utility and it seems to be a pain.
Hello libre users
Could you recommend software for making disk images? Is there any software that can make an image from inside the running OS, without having to boot into the imaging tool before the OS starts? I don't want to change the boot option just to make a disk image; I'd like to do it from inside the OS.
And the next part of the topic: any good, simple libre software for checksums? Here I need something for Windows.
Thank you all for your time.
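For the checksum part, both Linux and Windows already ship command-line hashing tools, so a dedicated program may not be needed; the file names below are placeholders:
sha256sum disk.img > disk.img.sha256   # Linux (GNU coreutils): create a checksum file
sha256sum -c disk.img.sha256           # ...and verify it later
certutil -hashfile disk.img SHA256     # Windows built-in equivalent (run in cmd or PowerShell)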
This might be a dumb question but do disk images include deleted data? In other words, do disk images duplicate every bit from the drive (even data that is considered "free space")? I'm trying to create a disk image of an SSD and save it to an external hard drive. However, I want the disk image to include deleted data (assuming it hasn't been overwritten). Is this possible or should I go about this in a different way?
Was wondering how the pros did it.
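In short: a raw, device-level image copies every sector, free space included, so deleted-but-not-overwritten data comes along with it, whereas a file-level copy does not. A minimal sketch, with sdX standing in for the SSD and the output path a placeholder (the external drive must be at least as large as the SSD):
sudo dd if=/dev/sdX of=/mnt/external/ssd-raw.img bs=4M status=progress   # sector-by-sector copy of the entire device
One caveat for SSDs specifically: TRIM may have already zeroed the blocks behind deleted files, so there is no guarantee the old data still exists at the device level.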
Or should I just use Kali? I assume the VPN isn't meant for your main system. Thank you, I only started recently and I'm trying to avoid running a VM in browser
I am trying to upgrade to 21.10. The download itself is done, but for over an hour now it's saying "Syncing recovery image to disk".
Is it safe to proceed with the upgrade? Or should I do something else first.
Any help is appreciated. Thank you.
Title says it all. I have tried downloading Saber multiple times in case a single download somehow got corrupted, but every time I try to install it I get this message. Google was no help, anyone know why this could be happening?
I recently tried stacking 1000 shots of M31 and ran into the following:
1- Only 552 images actually stacked as the registration failed on the remaining 448. Is there some setting I need to change? From the log the registration just says it "can't find putative star matches" though I can clearly see M31 in all my shots with some surrounding stars. I've attached links to two images (one that stacked and one that didn't) as well as the output log below.
2- The 552 that DID stack is taking up 510 GB of space on my SSD between the calibrated/debayered/registered images folders... Is there a way to save fewer files or use less disk space when running the stack? Like just have it be temporary files and then delete them? I'm tired of clearing space on my SSD every time I want to stack.
Any help would be appreciated!
Links:
Side by Side: https://drive.google.com/file/d/1yHhMvGceGnWVMfRKzFeq_yXrTakEM57W/view?usp=sharing
Image that DID stack: https://drive.google.com/file/d/1Jn4eRTJTjDKpNd2jf1WmsEBLMF5eyvIC/view?usp=sharing
Image that DID NOT stack: https://drive.google.com/file/d/1tknH3OVNMkTlRQZFNUHHysYXsJBO3RTu/view?usp=sharing
Output log from WBPP: https://drive.google.com/file/d/1GodqpkPuJKUFu3i-0YfO6MWoPBDoNiSN/view?usp=sharing
I'm trying to recover a 2008 backup of a photo library. It was backed up across twelve DVDs, with the content of each DVD being in a file named "backup.sparseimage." First I copied the DVDs to the NAS whole, then I tried opening the image files within the Synology OS, but it couldn't open them. So, I am forced to open the image files on my Mac and use Finder on the Mac to copy the files out of the images. It is going impossibly slow, but not just the copy operation, every aspect of the process. Even loading the folder index in Finder takes several seconds for each folder, and there are thousands of folders.
Meanwhile the drive is chattering up a storm almost 24/7. I recall reading in here a while back that some drives are particularly slow because of shingled recording (SMR), and I am suspecting that the shucked WD Elements drive I recently popped in is like this and may be slowing down the whole system. The other two drives are Seagate Exos drives, which I bought directly from a small Synology dealer, so I would've assumed they were fast drives, but maybe they aren't?
I'm just wondering if the problem here is the drives I have installed. I hate having to throw money at such a relatively trivial problem, but I do occasionally have to deal with large folders (the whole reason I bought a NAS in the first place) and I just can't deal with how slow this system is.
Should I swap out the WD drive for a better quality NAS drive, or would I be wasting my time with that step?
I don't understand why this isn't literally just double clicking the .img and putting in my password.
Here is what I have done:
How would I do this? I think that accessing this stuff from an LVM/encrypted Ubuntu installation might be what's messing things up. Most of the errors I have gotten say that the device is already in use (with no other information). I have successfully been able to attach the .img to a loop device and have it split into automatically detected partitions, but the partition still won't mount. I am correctly using the third one. Should I try from a live disk and then extract to another hard drive?
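For reference, here is a rough sequence that works for an image whose third partition is a LUKS container holding LVM; the volume group and logical volume names below are placeholders. The "already in use" error is often a name clash: if the volume group inside the image has the same name as the one the running Ubuntu install uses, it won't activate until one of them is renamed (vgrename can do that by UUID).
sudo losetup -fP --show disk.img      # attach the image and create per-partition loop devices; prints e.g. /dev/loop0
sudo cryptsetup open /dev/loop0p3 oldroot   # unlock the encrypted third partition (asks for the passphrase)
sudo vgscan && sudo vgchange -ay      # detect and activate any LVM volume groups inside it
sudo lvs                              # list logical volumes to find the one you want
sudo mount /dev/mapper/vgname-root /mnt     # vgname-root is a placeholder for the actual VG/LV pair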
I am not finding good articles on caching images on disk (persistent caching) efficiently. Is there a good approach to doing this? I found this one article, but since it uses PromiseKit, I am not able to convert it to Combine properly:
https://levelup.gitconnected.com/image-caching-with-urlcache-4eca5afb543a