I've been looking around on the net but can't find this. Something similar that can sync is FreeFileSync, which kind of acts like RAID 1 but without RAID; that's not what I'm looking for.
A program generates files in a folder on PC1, which doesn't have much disk space. On PC2, I wish to set up some sort of script that automatically checks for files in the folder on PC1 and copies them to another folder on PC2.
Is this possible?
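A minimal sketch of what such a script could look like, assuming PC1's folder is reachable from PC2 as a mounted path (e.g. an SMB or NFS share); the two paths are placeholders, not real locations:

```python
import shutil
from pathlib import Path

def pull_new_files(src: Path, dst: Path) -> list[str]:
    """Copy files that exist in src but not yet in dst; return the names copied."""
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for entry in src.iterdir():
        if entry.is_file() and not (dst / entry.name).exists():
            shutil.copy2(entry, dst / entry.name)  # copy2 preserves timestamps
            copied.append(entry.name)
    return copied

# Hypothetical usage on PC2, with PC1's folder mounted at /mnt/pc1/output:
# pull_new_files(Path("/mnt/pc1/output"), Path("/home/me/archive"))
```

Scheduled from cron or Task Scheduler every few minutes, this gives the "automatically check and copy" behavior; if the point is to free space on PC1, you could move instead of copy.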
I want to access / from Linux
IPFS is the InterPlanetary File System, an Internet protocol designed by Juan Benet in 2014 and developed as an open source project. Its design goals are permanent storage of data, elimination of duplicate data on the network, and addressing of data by its content rather than by the node that happens to store it.
As a global peer-to-peer distributed file system, IPFS aims to supplement or even replace the Hypertext Transfer Protocol that currently rules the Internet, connecting all computing devices with the same file system. Files published to it remain readily retrievable, whether documents, movies, or anything else.
IPFS is great.
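The content-addressing idea behind that dedup claim can be illustrated in a few lines: store blobs under the hash of their contents, so identical data maps to the same address and is only ever stored once. This is just a toy sketch of the principle, not IPFS's actual format (IPFS uses multihash-based CIDs and chunked Merkle DAGs):

```python
import hashlib

class ContentStore:
    """Toy content-addressed blob store: the key *is* the SHA-256 of the data."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        address = hashlib.sha256(data).hexdigest()
        self._blobs[address] = data  # identical data overwrites itself: dedup for free
        return address

    def get(self, address: str) -> bytes:
        return self._blobs[address]
```

Putting the same bytes twice returns the same address and stores a single copy, which is also why a content address keeps working no matter which node serves the data.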
I'd love to have a network of clients on different computers essentially sharing files based on a master folder - say, an archive of content. When any new files are added to the archive, they are automatically added to the list for other computers to download and seed.
I understand the security implications so I figured if it exists, it's a very closed system. Anyone got any ideas?
Hi.
I am thinking about how I could most feasibly migrate all the users at my company to have independent encrypted ZFS datasets on a large central storage server in the most secure way and wondered if any of you fine folks have tried this before or have any interesting thoughts on the matter.
Here's what I want to do:
Every user gets a normal linux network account. (So there's LDAP and Kerberos and PAM support etc etc etc. NFS and Samba share support. You know, network accounts. auto.home mounts. The usual)
I want every user to have a network home directory (or a separate data dir; it doesn't have to be the home necessarily) that is a network mount of some form. That directory will exist on a separate ZFS server from the user account source (the account servers are clustered in a high-availability setup), and each user account will get its own filesystem that is accessible from any domain computer. This setup enables us to do hourly snapshots for each and every user on each and every fs. Hallelujah. What a nice backup system that could be.
That's the more straightforward bit. Here's where it gets interesting: I want to do all that and make it so that it's encrypted at every stage in a manner that is systemically secure. Ideally, I want to use ZFS native encryption for this. And I want to do it on the current stable release.
So that means that the overall nethome storage pool needs to be encrypted with native encryption at the root level, and the storage server system itself is therefore secure at rest. On boot, a decryption key must be provided to mount the ZFS pool that has all the interesting data on it. (This mounting process can be automated and brokered over the network to other machines as appropriate. The purpose of this layer of encryption is to prevent a bad actor from getting anything useful from physical theft of the storage server. That's easy enough and that part works fine; I've already built a testbed for doing that.)
One of the very nice things about ZFS native encryption is that child datasets (each user having his or her own child dataset) can have their own separate encryption keys and we don't need to double or triple up on the encryption overhead. Ideally, each individual user dataset is encrypted using ZFS native encryption such that any systems administrators can't steal user files but can still manage the pools and ensure backups are working etc.
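For the per-user dataset part, provisioning could be scripted. This sketch only builds the zfs commands rather than running them; the pool name tank/nethome, the passphrase key format, and the initial snapshot are my assumptions, not a prescribed layout:

```python
def user_dataset_cmds(username: str, pool: str = "tank/nethome") -> list[list[str]]:
    """Build the commands to create a per-user natively encrypted child dataset.

    The child gets its own key (keyformat=passphrase here for illustration),
    so admins can manage and snapshot the pool without being able to read the
    user's data.
    """
    dataset = f"{pool}/{username}"
    return [
        ["zfs", "create",
         "-o", "encryption=on",
         "-o", "keyformat=passphrase",
         "-o", "keylocation=prompt",
         dataset],
        ["zfs", "snapshot", f"{dataset}@initial"],
    ]
```

In practice you'd feed these to subprocess.run on the storage server, and keylocation could instead point at a per-user key file delivered by whatever broker handles login.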
That's where it's so interesting as a secure home directory solution. A Kerb…
Hey all, does anyone have tips on allowing a PowerShell script, which is running under the local NT AUTHORITY\SYSTEM account, to save data to a log file on a network share? When I run the script under a user account it saves the log entry no problem, but when running as SYSTEM I get an "access denied" result.
I've configured the share permissions to allow access to Everyone, and the NTFS permissions for the log file allow Full Control to the Domain Computers group. However, if I launch PowerShell as the local SYSTEM account I still only have read-only access to that file, and get "access denied" if I try to append an entry to it.
Thanks in advance!
This post refers to user permissions and sharing settings in a file system using NTFS permissions and Active Directory.
When a specific user is given permission or access to a specific folder, this is done by adding the user in the permissions settings. This means that the SID of that user is added with a specific set of permissions (read, write, delete, etc.). If this user leaves the organisation and the account is deleted from Active Directory, the entry with that SID will remain. The claim is that, potentially, someone with access to that SID could then gain access to these folders. I think this is wrong for several reasons, so I'll outline the common arguments I've heard and my view in relation to them.
Someone could recreate a user with that exact SID
While possible, this is extremely unlikely as the SID is unique to each user. Even if we assume that this is possible, in order for you to create a user in Active Directory you need to be a domain admin, in which case you've already pwned the domain so that point is moot.
Someone could still use the credentials for that SID to perform actions in the network
Incorrect. If you want to use existing credentials to perform actions in the network, you are going to be verified with Windows authentication against the domain controller. The domain controller will not verify credentials for a SID that doesn't exist, so this is not possible.
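The point that a dangling SID in an ACL cannot be used to authenticate can be sketched as a toy model (the SIDs and names here are hypothetical, and real Windows authentication is Kerberos/NTLM against the DC, not a dictionary lookup):

```python
class ToyDomain:
    """Minimal stand-in for a directory service plus one NTFS-style ACL."""
    def __init__(self):
        self.accounts = {}  # sid -> username, i.e. what the DC knows about
        self.acl = set()    # SIDs granted access to some folder

    def grant(self, sid: str):
        self.acl.add(sid)

    def delete_account(self, sid: str):
        self.accounts.pop(sid, None)  # note: the ACL entry is *not* cleaned up

    def authenticate(self, sid: str) -> bool:
        return sid in self.accounts   # the DC won't vouch for a deleted SID

    def can_access(self, sid: str) -> bool:
        # Access requires both successful authentication and an ACL entry.
        return self.authenticate(sid) and sid in self.acl

dom = ToyDomain()
dom.accounts["S-1-5-21-1"] = "alice"
dom.grant("S-1-5-21-1")
dom.delete_account("S-1-5-21-1")
# The stale ACL entry remains, but access through it is still impossible:
assert "S-1-5-21-1" in dom.acl
assert not dom.can_access("S-1-5-21-1")
```

The stale entry is dead weight in the ACL, which is exactly the situation the post is arguing is cosmetic rather than dangerous.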
It is generally good practice to remove permissions for users who aren't there anymore
Sure, but IT admins are usually swamped with other tasks too. Why spend time cleaning up stuff that doesn't provide any added security? The only thing you gain is that your permissions look a little cleaner in the sharing settings, and messy-looking permissions are a minor annoyance at worst.
This is something that I've tried to get answers for in other subreddits and other forums for some time now. It is a view that I actively want to have changed since everyone keeps telling me that it is a security risk, but so far nobody has provided any reason apart from "best practice" so please Reddit, change my view.
The Issue
This occurs with sshfs, as well as any file system implemented with FUSE. Dolphin is version 19.12.3 on Linux.
When using "Details view mode", Dolphin will explore sub-directories of the current working directory. The only columns I have displayed are "Name" and "Modified", neither of which depends on information gleaned from scanning sub-directories. I also have previews disabled for folders in Dolphin's settings.
To give you an example, let's say we have sshfs mounted at /mnt/sshfs:

$ ls /mnt/sshfs
(dir) example
(dir) example2
test.txt

If we run dolphin /mnt/sshfs and do absolutely nothing else but look at the contents of the screen, Dolphin will, in the background, scan the sub-directories of the current working directory. Meaning, Dolphin will scan example, example2 and so on without the user exploring the directories themselves. I was able to confirm that Dolphin was doing this in the background by logging file system access while using Dolphin.
This is particularly expensive for network file systems, especially with sshfs, as upload speeds for consumer connections are very low compared to their download speeds.
This can also be a literally expensive operation if the file system is a FUSE fs over something like S3.
If the current working directory contains tens or hundreds of folders, this background loading could degrade the user's experience while the content is fetched.
It's also unnecessary wear on a disk.
For my use case, this impacts the users of one of my FUSE apps. In the app, I set a rate limiter in front of a web API. When Dolphin opens things in the background, that rate limit is hit with no interaction from the user at all. When users then do go to interact with their systems, they are rate limited because of Dolphin's high background load, and they have diminished experiences.
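For context, a limiter like the one described is roughly a token bucket; this hypothetical sketch (the capacity and rate are made up) shows how a burst of background directory scans drains it before the user does anything:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate` tokens/second."""
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, rate=0.5)
# A background scan of 10 sub-directories fires 10 requests at once...
background = [bucket.allow() for _ in range(10)]
# ...so the first real user action is already rate limited.
user_action = bucket.allow()
```

Only the first 5 background requests succeed, and the user's own request is rejected, which matches the diminished experience described above.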
I understand that the "Size" column will count items in a folder, but I believe you could turn that feature off at one point. I also remember being able to turn off pre-fetching of data from network volumes.
"Compact view mode" and "icon view mode" do not cause Dolphin to load sub-directories, which is what I believe to be the correct behavior, and the behavior I believe "Details view mode" should implement.
--
I think "Details view mode" should behave like "Icon view mode" and "Compact view mode" by not exploring sub-directories in the background.
I think users should be able to set whether or not Dolphi…
My workplace keeps all of its data on a server running Debian that I access through putty on Windows. I also have this server mapped as a network drive on my Windows system, so when I'm connected to my workplace's wifi I can freely move files between my Windows laptop and the Debian server.
But how does this work? If these two operating systems use different file systems, isn't it weird for the file operations on one system to work seamlessly with the other?
Back in the day I used to share files between two computers using a serial port and a null modem. Is there any equivalent to doing this nowadays? I know USB isn't generally used for this, as it's host-only and connecting a USB cable between two computers can fry the USB hub, but if both go to a common USB hub, is there any software in Linux that would allow this?
Thanks --
Hello datahoarders. I'm new to all this hoarding game but excited to get started. I've researched a lot about server configurations but just wanted honest recommendations from people who have actually made this thing and used it for long periods of time.
Are an associate degree and certs good enough to get a job in a systems, network, or cloud position in the IT field? I'm young, 22, and I have my CompTIA A+ cert. I hear it's good to have both a degree and IT certs in the IT field. So I want to get an associate degree and then eventually go for a bachelor's at a later date. Can I get a job as a cloud admin or cloud engineer with the combo of AWS and CCNA certs plus on-the-job experience, without a degree? I still want the associate degree in Info Systems eventually.
Hi all. At work we are using NFS for our network file systems on Linux. It has some security flaws, so access is restricted to machines maintained by the admins. Is there a good alternative to NFS that is (almost) as fast, but with better security? SFTP would be secure but it is slow, and Samba doesn't map Linux file permissions well. What are the other alternatives? We have LDAP-based login, and we could use Kerberos or SSH keys; any thoughts would be useful.
Thanks
We all share the same network in a "peer to peer" format; there are 10 computers and 4 printers connected together. We store all of our file information on one computer we call the "server". When I bring up Task Manager on this computer storing our files, the disk usage is almost always at 100%, and I'm afraid of losing our data from overworking this drive. A few questions: Is this "peer to peer" network the proper way to have this set up, or should we be using the Homegroup feature in networking instead? Is our hard drive storing the information in danger of crashing? Do I need to upgrade our drive to a faster-rpm or solid state drive to reduce the usage? How can I assign different users to groups so that I can file share with only specific users? Any information is much appreciated; let me know if there are any easy-to-understand resources out there where I can read about this. Thank you in advance!