Something really annoys me. So, we have the standard XDG_CONFIG_HOME, which is ~/.config.
Then why do we have ~/.xmonad? Why do we have ~/.Xresources? I get it, there are historic reasons: .profile and .bashrc have been there for SO long. But ~/.minecraft? What even is the reasoning? Or .gitconfig. Or .steampid, which is a symlink to .steam/steam.pid. I mean, we could just use .config, .cache, .local and .history, and everything would be alright.
My home directory currently has 48 dotfiles. This is huge. This is nonsense. There is .local, there is .config, there are all those standards that nobody really respects.
I really have this feeling that there are literal TONS of standards, and everybody just decides to say "no, I won't respect that". It may make sense SOMETIMES. If you edit some files a lot, having them closer to the home directory might be useful. And xterm's way of handling +/- options allows very short yet powerful program calls.
This frustrates me. If Linux is supposed to be consistent, how are those things even real?
Edit: I mean, this is a thing: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
Edit 2: I take some things back; some directories make more sense in the home directory. I feel like splitting .gnupg up would be completely stupid. Though I think that .cargo or .rustup should just put themselves in .local.
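(For reference, respecting the spec costs an application about two lines of shell; a minimal sketch, where "myapp" is a placeholder name:)

    # Resolve the config directory per the XDG Base Directory spec:
    # use $XDG_CONFIG_HOME when set, otherwise fall back to ~/.config.
    config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
    mkdir -p "$config_dir"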
Need to find all old user accounts who have home directories?
How about old directories that have no user?
My script will scrape through your user share and find all old accounts and folders that have no user. You can also move, delete, and get the folder size of each user found.
This is my first public script, please give me any notes you feel are needed.
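(For readers on a Linux box rather than a Windows user share, the core idea looks roughly like this; the /home location is an assumption, not the poster's actual script:)

    #!/bin/bash
    # Rough sketch: list home directories with no matching user account.
    for dir in /home/*/; do
        user=$(basename "$dir")
        if ! id "$user" >/dev/null 2>&1; then
            # id exits non-zero when the account does not exist
            echo "orphaned: $dir ($(du -sh "$dir" | cut -f1))"
        fi
    done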
Hello, I hope you're doing well.
I'm completely stumped trying to install Fedora and use an existing home directory. Even when I get the existing home directory mounted and have edited fstab, I am unable to log in as the users that exist on that home directory, even though I can view their files.
Is there a trick for this? Thanks!
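(A common cause here is that the reused /home is owned by UIDs that don't exist, or don't match, on the fresh install. A hedged sketch of the usual fix; the user name alice and UID 1000 below are placeholders:)

    # Check which numeric UIDs own the old home directories
    ls -ln /home
    # Recreate the account with its old UID/GID; -M skips creating a new home
    sudo groupadd -g 1000 alice
    sudo useradd -u 1000 -g 1000 -d /home/alice -M alice
    sudo passwd alice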
Hello,
I'm looking for a FOSS alternative to NextCloud, but it has to be very specific:
It needs to use already-existing local Linux user accounts and home directories when a user logs in. In other words, when a user logs into the web frontend using the same Linux credentials they use for local logins and SSH, the app will serve the user the files in their home directory.
No LDAP please! Just local Linux authentication, exactly like SSH does.
Is there any such web app similar to NextCloud that seamlessly integrates with Linux users and homes?
So...
I was writing an installation script for my GitHub project, and at some point my script was supposed to make a directory for my program in ~/.local/bin/MyProject, but instead it created it in the project folder, so it looked something like this: MyProject/~/.local/bin/MyProject.
I thought, "Okay, let's delete this folder and rewrite the script to make directories in the correct place". But as I typed "rm -rf ~" and pressed Enter, I knew I'd fucked up :( I started spamming Ctrl+C, but it was too late, and about half of my home directory was gone.
So, I think I will have a long evening recreating my config files and other settings :`)
My lesson is to think first, then press Enter. Edit: And also have backups!!!
I've got a React/Electron app and can't seem to access Node's fs. I've tried import fs from 'fs', I've tried import * as fs from 'fs', and I've tried const fs = require('fs'), and all give the same error. Why is it looking in src instead of node_modules like it does for React and Axios?
Here's the full error message:
Module not found: Error: Can't resolve 'fs' in '/home/tucker/Hub/Dev/jot/src'
Recently switched jobs to a place with ~1500 users and infrastructure held together by chewing gum and optimism. A recurring issue/alert is space usage on SMB shares, split across several servers (vmware guests, backed by netapp storage).
Users have a home directory mapped in AD at creation, and those are somewhat heavily used. Currently they're all hosted on a single server with a 3 TB drive. The shares are actively used by about half the user base, and the drive floats around 85% full, spiking to 90% and triggering a cleanup script that archives and moves off any terms. There is no storage quota set.
I'm trying to figure out the best way to wrangle this. We're expecting headcount increases in 2022, and our budget is going to go to refreshing some of our very, very old hosts. Are there best practices surrounding managing AD home folders in large orgs, or something obvious we should be doing? I intend to configure a storage quota, but beyond that I'm not sure what else to do to wrangle it, or how to effectively split it between drives/servers (and our underlying storage arrays are nearly fully scoped out as it stands).
I know that ls prints everything in the directory, but is there a way to print files and directories separately?
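(A few common ways to split the listing, for anyone searching later:)

    ls -d -- */                 # directories only: the trailing-slash glob
    ls -p | grep -v /           # files only: -p marks dirs with /, grep drops them
    find . -maxdepth 1 -type f  # files, via find
    find . -maxdepth 1 -type d  # directories, via find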
I've set up my system with BTRFS and Snapper for rollback. However, whenever I create a snapshot, the /home/ folder within the snapshot seems to be empty.
I've created a root config which is mounted at the /@ subvolume:
snapper -c root create-config /
Any ideas? The files within the root directory are shown, as is /var/log. Only the /home/ folder is empty.
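(One likely explanation: on typical layouts /home lives on its own subvolume, often named @home, and btrfs snapshots do not recurse into other subvolumes, so a snapshot of / shows an empty /home. If that's the setup here, /home needs its own snapper config, sketched below:)

    snapper -c home create-config /home
    snapper -c home create --description "test snapshot of /home"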
I'm planning to buy my mom a Backblaze subscription for Christmas, and take advantage of the 3-month referral bonus. I installed the 8.0.1.572 client on her ancient iMac running Catalina (10.15.7) and kicked off the initial backup.
After the file scan, it reports 109,953 MB in 94,006 files to back up. However, there should be nearly half a terabyte and half a million files. Her home directory alone has 391 GB and almost 150K files.
According to the exclusions list, there are a bunch of paths that the client simply will not back up, and I cannot remove them from the list. Incidentally, the warning dialog pops up twice each time I click on one of the list entries. I do understand that Backblaze is not designed to create an exact image of your drives, nor can one rely on it to restore a bootable OS. But there seems to be a rather large gap between what a user expects to be backed up and what the client is actually backing up.
Is there a reason why Backblaze won't back up those directories on a Mac? And is there a way around it? I'd rather not have to attach an external drive to this iMac, rsync home directories and the Applications folder over, and then have BB back that up...
UPDATE: Noticed that the Reports tab shows roughly the expected amount of data to back up (the most important being the photos and videos). I'm still wondering why the client is only backing up about a quarter of the expected number of files.
UPDATE 2: Been busy with family stuff over the holidays, but I'm pleased to report that things have sorted themselves out. Here's what happened. I had left the iMac running while we were off doing Christmas-y things. I was also setting up local backups for my mom, and at one point I wanted to test the clone of the boot drive. That worked fine, and I rebooted again with the original drive. Of course, this means the Backblaze client restarts and rescans. This time, it correctly discovered the 500 GB of files. I pruned that back a bit to a more manageable size, and now things are running well.
The issue with /home being on the exclusion list is a red herring. That appears to be an automounted filesystem that doesn't have anything mapped to it. The actual user home directories are in /Users, which is indeed being backed up by default.
Hope this helps others who run into this in the future!
I accidentally set all of my files to be executable, but I cannot revert it. When I type "sudo chmod -R -x ./*" it does that, and then I do not have any access to the files in my home directory. Can anyone help me out here?
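(A hedged sketch of the usual recovery: the problem is that -x also stripped the execute bit from directories, which is what grants access to their contents. chmod's capital X applies execute only to directories, and to files that already have an execute bit, so after a blanket -x it effectively restores directory access only:)

    # Put the search (execute) bit back on directories so they can be entered
    chmod -R u+X ~
    # Equivalent, more explicit form using find:
    find ~ -type d -exec chmod u+x {} +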
I kind of messed up my system, but I think I got my encrypted files off of the drive. I have Mint on one SSD, and I used to have an encrypted /home on a separate HDD. If I replace the second HDD with an unencrypted drive, what do I have to do? Edit the fstab? Do this from a live CD? Will LM recognize the drive, or is it expecting to see an encrypted drive?
Thanks
How do I fix this error? I am running Ubuntu and have rebooted, but I cannot get the node to start. It's been working fine for weeks and I am guessing that I messed up something, but cannot figure out what.
EDIT: I have recently installed VNC, if that helps.
EDIT: I have verified that if I disable the VNC during reboot, dogecoin starts fine. The problem of course is that I need to run both. Is that not possible for some reason?
https://m.youtube.com/watch?v=MHsI8hJmggI&list=WL&index=3
I think I'm either doing it wrong or using the wrong syntax with "mv".
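(For reference, the basic shapes of mv:)

    mv oldname newname           # rename a file (or directory)
    mv file.txt ~/Documents/     # move into a directory; the trailing / catches typos
    mv -i source destination     # -i asks before overwriting anything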
Hi, I have a dockerized service that I need to run, which I start by running a bash file. First, some context; if you want to skip it, you can go directly to the heading called "issue".
I have the following files that I copy into my docker container:
files
├── flag.txt
├── monster
├── start.sh
└── t.py
and then my Dockerfile ends with the following command:
CMD ["sh","/home/bob/monster/files/start.sh"]
So the bash file is at the path "/home/bob/monster/files/start.sh", and I simply want to run that file.
My bash file looks like this:
#!/bin/sh
while true; do
    socat -dd TCP4-LISTEN:9000,fork,reuseaddr EXEC:./monster,pty,echo=0,raw,iexten=0
done
and this basically binds the CLI program "monster" to a port and thus serves it such that people can netcat into it.
So, if I build and run this container, everything looks fine. But when I then netcat to the exposed service, I get the following error on the container:
2022/01/01 22:59:53 socat[9] E execvp("./monster", "./monster"): No such file or directory
2022/01/01 22:59:53 socat[8] W waitpid(): child 9 exited with status 1
So it looks like it can't find the "./monster" file.
So, I tried running the docker container with the "-it" option in order to look around in there.
I then cd'ed to the directory that contains the bash file that I would like to run:
root@3a06d823eb68:/home/bob/monster# cd files
root@3a06d823eb68:/home/bob/monster/files# sh start.sh
Running it works fine, and I can netcat to it also, which works.
I then try to move up one directory, and try again:
root@3a06d823eb68:/home/bob/monster/files# cd ..
root@3a06d823eb68:/home/bob/monster# sh files/start.sh
2022/01/02 00:13:50 socat[18] E execvp("./monster", "./monster"): No such file or directory
2022/01/02 00:13:50 socat[17] W waitpid(): child 18 exited with status 1
But when I do this, I run into the same error as before.
The way I understand it, the issue is that when the script is called from the home directory, the relative paths in it are resolved against the current working directory, where ./monster does not exist.
So, how can I keep using relative paths in my bash file while still calling it from the home directory?
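(One common answer, sketched: have the script cd to its own directory first, so relative paths like ./monster resolve next to the script regardless of the caller's working directory:)

    #!/bin/sh
    # Resolve relative paths against the script's own location
    cd "$(dirname "$0")" || exit 1
    while true; do
        socat -dd TCP4-LISTEN:9000,fork,reuseaddr EXEC:./monster,pty,echo=0,raw,iexten=0
    done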
I'm looking into a setup using tmpfs for a clean root on every boot. The specific implementation I'm looking at would require immutable users. I would have /home on a partition, and I'd like to make sure a wrong config change can't accidentally nuke my home directory.
Thank you for your time.
Hello everyone!
I am a Termux newbie. I want to be able to "manually" add files to /data/data/com.termux/files/home/.
However, I don't see anything resembling this folder in my file manager. The Android folder has only a media folder.
(I use boox note air 2, file manager app called "storage")
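(If it helps anyone: Termux's home isn't visible to normal Android file managers, but Termux can reach shared storage after a one-time setup; the file name below is a placeholder:)

    # Run once inside Termux; grants the storage permission and creates ~/storage
    termux-setup-storage
    # Then copy files from shared storage into the Termux home directory
    cp ~/storage/downloads/somefile.txt ~/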
-SOLVED-
On my PC, the owner of my home directory is root. I've tried using sudo chown, but it still stays as root. I even tried running Nautilus as root and changing it that way, but the instant I change it to my username it goes right back to root.
EDIT: I later realized that this was because my home directory was on my 4 TB hard drive, which wasn't correctly configured in fstab. I fixed the fstab entry, and now I'm the owner of all my files and WINE is working.
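(For anyone hitting the same thing: chown has no effect on filesystems that don't store Unix ownership, such as NTFS or FAT; there, ownership comes from the mount options instead. A hypothetical pair of fstab lines, with placeholder UUIDs:)

    # ext4 stores ownership itself; no uid= option is needed (or accepted)
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home      ext4     defaults                    0  2
    # NTFS does not; ownership is assigned at mount time via uid=/gid=
    UUID=XXXXXXXXXXXXXXXX                      /mnt/data  ntfs-3g  defaults,uid=1000,gid=1000  0  0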
How can I remove the Sync directory, as I do not use it?
I am using Linux.
Thanks in advance :)
On my Linux desktops, I feel like I am constantly editing the exclusion files for my home directory backup jobs. As I start using new applications, or as applications get updated, more and more crap ends up in my home directory that I don't necessarily need to back up.
Even on a brand new Linux install, things pop up in my home directory and I just think "WHAT THE HECK IS THAT?" (like ~/.nv for NVIDIA, which I'd never seen until my new laptop).
Then there is stuff that is obviously temporary data; mostly in ~/.cache, but you find it all over the place. Then there are also easily replaceable files (stuff that can easily be re-downloaded from repositories when you reinstall an application, etc.).
I have a mostly manual checklist of things to do when I install a new Linux desktop system. This is because usually so much has changed since the last time I did a fresh Linux install that just copying over the home directory isn't feasible from version to version of a distro. (Not that Windows is any better in this regard.)
So this got me thinking: is this something that should become a new standard? Every application could put a file somewhere that defines which files are important and which are not. Perhaps they could even include a script to determine that.
Perhaps they can even categorize them into folders such as:
Then within those folders, each application can put scripts such as "firefox" "gnome" etc.
The possibilities and flexibility are endless. These folders could be stored in places like /etc/backup and ~/backup or ~/.backup or something.
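(Until such a standard exists, the closest thing is a hand-maintained exclude list; a sketch of one usable with rsync, where every entry is just an example:)

    # home-backup.exclude -- use with:
    #   rsync -a --exclude-from=home-backup.exclude ~/ /mnt/backup/home/
    .cache/
    .local/share/Trash/
    .nv/
    .thumbnails/
    **/node_modules/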
Hello everyone!
One of the reasons I use Flatpak is that I can prevent applications from cluttering my home directory with custom folders or configuration files and from writing into the "global" dconf configuration and executable files, like my bashrc/zshrc. Therefore I can uninstall any application with flatpak uninstall --delete-data id.app
and there's no trace left behind.
To achieve this I am using, among other things, a global override to deny all applications the permission to access the host, home and xdg config directories: flatpak override --nofilesystem='home' --nofilesystem='host' --nofilesystem='xdg-cache' --nofilesystem='xdg-config' --nofilesystem='xdg-data'
Unfortunately, applications can still gain access to arbitrary directories by explicitly specifying them in their manifest. E.g. Minigalaxy grants itself access via --filesystem=~/GOG Games:create, and with my current solution I need to detect this (flatpak info --show-permissions id.app) and then override it manually for that specific application. Unfortunately, when updating (e.g. automatically via GNOME Software), applications can add new permissions without requiring confirmation, so this is not a proper workaround.
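(A stopgap that could at least make the auditing less manual: loop over every installed app and print the ones requesting filesystem access. This is a sketch; as far as I know, the filesystems= line is what flatpak info --show-permissions emits for such apps:)

    # List installed apps whose static permissions include filesystem access
    for app in $(flatpak list --app --columns=application); do
        fs=$(flatpak info --show-permissions "$app" | grep '^filesystems=')
        [ -n "$fs" ] && echo "$app  $fs"
    done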
As far as I see it, this makes the global override completely useless, as applications can still silently gain access to any directory on my computer, including write access to my bashrc/zshrc (= leaving the sandbox).
Is there any way to prevent this from happening? Is this a bug in Flatpak, which should be reported?
Thanks in advance!
Hi, I'm a new Pop!_OS user! I realised some time after the installation that all my applications were installed in the home directory, where I have only about 8 GB of space, whereas I need much more storage and my root partition is about 250 GB. The problem is that I can only write there from the terminal with sudo privileges, and instead I'd like to install applications there. Is there a way to solve my problem, or should I just reinstall Pop!_OS and give the home partition all the space?
I have a home network with several workstations and just a few users. In order to keep home directories synchronized, I'm thinking about putting each user's home directory in a place on the server and then, at boot time, mounting that place on the user's local /home/$USER. What would be the downside?
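(For reference, the classic way to do exactly this is an NFS mount; a hypothetical fstab line for each workstation, assuming a server named fileserver exporting /export/home:)

    fileserver:/export/home  /home  nfs  defaults,_netdev  0  0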
I have done a bit of reading, and my UUIDs are listed as the same in both Arch and Mint, so I think it should work OK.
My concerns are that Mint boots a 5.4 kernel and Arch is on 5.15. Will the dotfiles be in any sort of conflict?
And the other difference I can think of is that Arch uses systemd-boot and Mint uses GRUB.
Mostly the forums say it should work, but I figured I would ask if there are any "gotchas".
Thanks for any info.
I am attempting to clean up my home directory and move many of the odd files into my XDG base directories. I have knocked out many of them, but I am having trouble with a couple of directories. The main one is ~/.icons; I cannot find an environment variable which corresponds to this directory. Does anyone know what that variable might be? It's annoying me at this point that I cannot find it. What I mean is: is there a variable, say $DOT_ICON_DIR, that I could change to point at some folder in the .cache directory?
For a more general question: how would you go about finding the environment variable one of these config folders belongs to?
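(As far as I know there is no dedicated variable for ~/.icons; per the icon-theme spec, the XDG-era location is $XDG_DATA_HOME/icons, defaulting to ~/.local/share/icons, though some older programs only scan ~/.icons. A sketch of the move:)

    icons_dir="${XDG_DATA_HOME:-$HOME/.local/share}/icons"
    mkdir -p "$icons_dir"
    mv ~/.icons/* "$icons_dir"/ && rmdir ~/.icons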
Hello,
I must have done something. My home directory now has all of these dot entries (.gnupg, .ecryptfs, .sudo_as_admin_successful, .xsession-errors, ...). What could have caused this, and what can I do so my home directory isn't filled with these dot folders?
The only things I did differently on this install were adding the flatpak version of Firefox with a user.js file and a new profile, and installing Tiger and rkhunter via the software manager. I've since removed Tiger and rkhunter, and the dot folders are still there.
Any help is greatly appreciated!!
Thanks!!
Upgraded to DSM 7 last night and now see /home and /homes folders. I am learning their purpose; no issues with their presence... But workflow-wise, do people typically keep all their files in /home? I created a shared folder to basically house my entire digital file library, and it is mounted to a drive letter in Windows 10. Just curious what the purpose of /home is. Is it just for apps to use, e.g. Synology Photos uses /home/photos? Is there any reason I would want to migrate from the root shared folder I have been using to /home?
When I wasn't running flakes:
I kept my system configuration in /etc/nixos/configuration.nix and my home-manager configuration in ~/nixfiles/home.nix.
And then I stuck all my user specific config files in ~/nixfiles/config/
Switching to flakes, it seems that the few setups I've found stick everything in one folder, usually a user folder (like ~/dotfiles), with shell scripts to manually force updating flakes and the system configuration.
This seems counterintuitive, since that would require putting a system configuration in a user directory. The alternative would be putting user configuration into a system directory.
While having everything in one directory is nice for easily uploading to git, I'm not sure this is the best implementation. It would be nice for additional users to be able to set up their home configuration independently of the system files (even if I usually have only one user on my systems at the moment).
Should I be concerned about having user and system configurations being together? How do you set it up?
Thanks!
So my wife recently switched to Linux Mint from Windows 10. Being able to set your default folders is something she used on Windows 10.
I think Nemo should be able to do this. Under Properties you can see that /home is being used; I just don't know how to edit it.
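(The default folders Nemo shows are the XDG user dirs, configured per user in ~/.config/user-dirs.dirs; edit the relevant lines, then log out and back in, or run xdg-user-dirs-update. The paths below are placeholders:)

    # excerpt from ~/.config/user-dirs.dirs
    XDG_DOWNLOAD_DIR="$HOME/Data/Downloads"
    XDG_DOCUMENTS_DIR="$HOME/Data/Documents"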
Hi everyone,
we are rolling out a customized Linux in our company. I'd like to prepare a template $HOME for our LDAP users.
The way I customized the desktop was to prepare everything the way I wanted in a VM, creating a tarball with some excludes and integrating it into /etc/skel of the next ISO build. That way I can iteratively modify the user template. This procedure works just fine in a system where the user is local and the username hence can be fixed.
Customizations contain
However, now I want to apply it to the new multi-user scenario. I find that it won't work, since $USER tends to get included everywhere (absolute paths, etc.), which is not surprising at all.
I once tried a simple search & replace, and while it seemingly fixed some things (though not, e.g., the Chrome profiles), it is obviously an ugly-as-hell hack. I assume the user name can be hidden in database files or similar binaries, which you can't patch this way.
Are there any tools addressing this task? Is there an approach that covers all apps, rather than writing template scripts for each one of them?
Thanks in advance!
My current desktop setup: Linux Mint 20.2 Cinnamon on a SSD.
/home (which is private) lives on a separate physical HDD. I don't have links or anything complicated.
Santa (Thanks!) is bringing me a new 1 TB SSD. I would like to partition that drive into 100GB for OS and the rest for my home directory. This is how I would like to do it:
Disconnect my drives, put in the new SSD, and then do a clean install of Mint, maybe with the logical(?) partition layout and an encrypted /home directory. Then connect my old /home HDD in a USB docking station and copy over all my files (with hidden files shown) to the new SSD's /home partition.
Is this possible? Any problems now or later on? Does my /home directory contain just config files, or also programs like Google Earth, FreeTube, etc.? Will I have to install all my programs again? I'm somewhat comfortable using the terminal. I would like to preserve permissions, dates, etc.
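(A hedged sketch of the copy step: rsync's -a keeps permissions, dates and ownership, -A and -X carry ACLs and extended attributes, and -H preserves hard links. The mount point and user name are placeholders:)

    sudo rsync -aAXHv /media/olddrive/myuser/ /home/myuser/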
Thank you
Hey y'all, I'm running into this issue. The solution is to remove the directory ~/.cache/nvim/hotpot. I would like to automate this when I upgrade my home environment packages, as the issue seems to happen after a home-manager switch --flake <flake_path> --recreate-lock-file operation.
I've tried putting this in my home.nix:
# clear hotpot cache on upgrades
home.extraProfileCommands = ''
HOTPOT_CACHE="${config.home.homeDirectory}/.cache/nvim/hotpot"
if [[ -d $HOTPOT_CACHE ]]; then
${pkgs.coreutils}/bin/rm -rf $HOTPOT_CACHE
fi
'';
But it hasn't worked :( Any suggestions? Thanks in advance!
So I was an Arch user until yesterday, when I did a system update and everything completely broke down, so I switched to Fedora. I had the home directory on a separate partition, so I didn't lose anything. Anyway, I've heard that the "official" way to install Steam on Fedora is through Flathub. Because on Arch I had installed Steam through the AUR, all my games are currently in my home directory. I was wondering if there is an easy way to move my games from the home directory to the Flatpak directory. Even better would be if I could just move all my Steam configs as well.
UPDATE: I've decided to just install Steam through rpm fusion