I am playing with Docker Swarm, stacks and Portainer.
I would like the Portainer management service to move to another node when the manager changes (say, because the manager failed).
To preserve its settings, I either need to:
Anyone implemented something like this already?
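A minimal sketch of how this could look in a stack file, assuming the standard portainer-ce image; the NFS server address and export path are placeholders, since a plain local volume would not follow the service when it moves to another manager:

```yaml
version: "3.8"
services:
  portainer:
    image: portainer/portainer-ce:latest
    volumes:
      - portainer_data:/data          # Portainer keeps its settings here
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager      # restrict to managers, not one fixed node
volumes:
  portainer_data:
    driver: local
    driver_opts:                      # back the volume with shared storage (NFS here)
      type: nfs
      o: "addr=192.168.1.10,rw"       # placeholder NFS server
      device: ":/exports/portainer"   # placeholder export path
```

With /data on shared storage, Swarm can reschedule the single replica onto any surviving manager and the settings come along.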
As a sysadmin, what do you prefer in Linux compared to Windows Server, and vice versa? No religious war please; I'm after concrete differences between advanced tech configs: AWS, Azure, APIs, Active Directory, file sharing, clustering, containers, PowerShell/Bash scripting, System Center vs. Ansible, or something else. In short, what works better for you (or your company) in one compared to the other?
Hey guys, we are in the middle of migrating from Hyper-V (2016) to VMware vSphere 7. The VM migrations (V2V) are going well and are not a problem. I'd like to hear what you suggest for the Clustered File Server roles that we need to migrate. We need to maintain HA for those roles, so it seems like we need to nest a Windows Failover Cluster using Windows VMs on the VMware cluster. Is that the best way to do this? Is there a native VMware method for this that's better? How do you present an iSCSI LUN directly to these VMs in VMware? Any tips and tricks for getting the config from the source to the destination? Has anyone leveraged Microsoft's Storage Migration Service to migrate the config? Or would you just build fresh and copy the data with Robocopy or something? Thanks in advance for any advice.
It is time for a new file server. First one in over a decade, and I am not sure whether to virtualize the server(s) or cluster them. We have a fancy new whiz-bang 3-node failover cluster on which we want to host the file server(s), but I have a few questions. Given that we expect to store less than 5 TB of data, and the cluster is currently underutilized:
First, what are the pros/cons of running the file servers as virtual servers in the cluster (on CSVs) versus running them as Clustered File Server(s) using the General Use File Server role?
If we use the Clustered File Server option, and since only one node has access to a Clustered File Server at a time, would it be best to have more than one file server (e.g. MarketingFS and BillingFS) in order to split the I/O between nodes?
If we virtualize the file server(s), would there be any reason to have more than one?
Thanks in advance for any feedback.
I have an SSD with multiple partitions. One of them is formatted with ext4 and the other with OCFS2. I would like to compare latencies for the two. I understand that OCFS2 will perform worse than ext4; I just want to quantify it. What tool should I use, and how (eBPF, fio, perf)?
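fio is probably the most direct tool for this: run an identical job against a directory on each mountpoint and compare the completion-latency (clat) percentiles it reports. A sketch of a job file, where the mountpoint paths are assumptions:

```ini
; latency.fio -- run as: fio --directory=/mnt/ext4 latency.fio
;            then again: fio --directory=/mnt/ocfs2 latency.fio
[global]
ioengine=libaio
direct=1          ; bypass the page cache so you measure the FS/device, not RAM
bs=4k
size=256m
runtime=60
time_based=1

[randread-latency]
rw=randread
iodepth=1         ; queue depth 1 makes the clat numbers pure per-IO latency
```

For a deeper look at where OCFS2 spends the extra time (DLM locking etc.), eBPF tools from bcc such as biolatency or fileslower can break it down further.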
One of my clustered servers is experiencing HUGE lag, and I need to transfer off, but I can't get to the OB because of the aforementioned lag. Can I do it with admin commands?
I have four sites that all operate on dynamically addressed public IPs (DHCP) from my service provider, and after a power outage in the region from a storm, the addresses change, which is really bad news for me because then I can't reach my servers...
My servers at each location run a cron job that sends their private and public IP addresses, WAN link speed, resource consumption, etc. to the servers at the other locations. So once I phone up one of the locations to get its latest IP, I can often reach the others after about half an hour of DNS purging...
However, I would like to know if there is a FOSS Linux application that can more or less form stable and reliable bonds between the sites, with a decentralized web interface accessible at each location. Perhaps that way I could update settings on one machine and all the others would pick up the latest config file version as well. This should be a very basic program that simply connects to other nodes, shares up-to-date config files (which would allow adding or removing nodes, and maybe some web preferences), and most importantly reports the status of itself and the other machines (including detecting IP changes on its own WAN). Thanks everyone; sorry, googling this topic is lousy, I just get a bunch of ads for IP protection and Cisco marketing.
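I haven't found a packaged FOSS tool that does exactly this either (Serf/Consul and Syncthing each cover parts of it), but the core convergence rule is small: every node keeps, per peer, the newest status report it has seen and passes its merged view onward. A hedged Python sketch of just that rule, with all field names invented:

```python
def merge_reports(local: dict, incoming: dict) -> dict:
    """Per node, keep whichever status report has the newest timestamp.

    A report looks like {"ts": <unix time>, "wan_ip": "...", ...};
    both dicts map node name -> report. Field names are illustrative.
    """
    merged = dict(local)
    for node, report in incoming.items():
        if node not in merged or report["ts"] > merged[node]["ts"]:
            merged[node] = report
    return merged

# Node B hears a fresher report about node A and learns A's new WAN IP.
local = {"A": {"ts": 100.0, "wan_ip": "203.0.113.5"}}
incoming = {"A": {"ts": 200.0, "wan_ip": "198.51.100.9"}}
print(merge_reports(local, incoming)["A"]["wan_ip"])  # the newer address wins
```

Because the merge is order-independent, it doesn't matter which node you update first; the others converge as soon as they exchange reports.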
It's your controller coming unplugged right as you drop into a new system.
I have no idea what the keyboard controls for this game are, but ESC + up + enter works delightfully well.
Hi,
I recently stumbled across this awesome subreddit and wanted to ask for some advice.
The main question for me is: which distributed file systems would be suited to storing a large number of small files in a Kubernetes cluster? I saw that you can dump, for example, Wikipedia articles and end up with around 2 TB of data and around 5 million files. I'd like to access these via a distributed filesystem, and maybe build a small web app in which I can search over all the files or upload new ones.
I'm trying to understand the topic of big data and DFS a bit more; I'm not a professional working in DevOps right now, I study CS and just got interested in the topic. I started reading papers about infrastructures like HDFS on Hadoop, for example, and learned that a large number of files can cause performance issues: data is stored in blocks of 128 MB to 256 MB, and since the metadata for every file and block is held in the namenode's limited memory, lots of small files can become a problem, if I understood that right.
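To put rough numbers on the small-files problem, using the commonly cited approximation of about 150 bytes of namenode heap per file or block object (an estimate, not an exact figure):

```python
# Rough namenode memory estimate for the small-files problem.
BYTES_PER_OBJECT = 150       # approximate heap cost per file/block object
files = 5_000_000            # the ~5 million files from the Wikipedia dump example
objects = files * 2          # each small file needs ~1 file object + 1 block object
heap_bytes = objects * BYTES_PER_OBJECT
print(f"{heap_bytes / 1024**3:.2f} GiB of namenode heap")  # roughly 1.4 GiB
```

About 1.4 GiB just for metadata is manageable, but the same math at 500 million files is why HDFS pushes people toward fewer, larger files (HAR archives, sequence files, etc.).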
I found distributed filesystems like HDFS, Ceph and Lustre, and file formats like HDF5, but I have to check these out more before writing about them, to be honest.
I'd love to hear about your experience with similar topics, and I'd like to learn from you guys.
have a great day
I'm fascinated by binary stars, they just look so pretty. I was wondering what the highest number of stars is that you guys have seen clustered close together like that. I know 64 Piscium, I believe, has like three stars in close orbit; does anybody know of more?
First things first, this is an almost complete Ansible noob here. Every week I need to restart a number of services that together form a cluster behind a single LB. The way to do this is restart them one-by-one, waiting for the line that signifies that the node has joined the cluster, and then going to the next one. Is there a good and sane way to automate this using Ansible?
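A common pattern for this is `serial: 1` plus a wait on the log line. Here's a sketch of a playbook, where the service name, log path and regex are placeholders for your setup:

```yaml
# rolling-restart.yml -- restart cluster members one host at a time
- hosts: cluster_nodes
  become: true
  serial: 1                                    # finish each host before starting the next
  tasks:
    - name: Restart the cluster service
      ansible.builtin.service:
        name: myclusterd                       # placeholder service name
        state: restarted

    - name: Wait for the 'joined the cluster' line
      ansible.builtin.wait_for:
        path: /var/log/myclusterd/current.log  # placeholder log path
        search_regex: 'joined the cluster'     # placeholder log line
        timeout: 300
```

If the service exposes a health endpoint instead of a log line, `ansible.builtin.uri` with `retries`/`until` is a cleaner wait; either way, `serial: 1` is what guarantees the one-by-one order.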
Hello,
I am migrating a Server 2012 R2 clustered file server to a standalone Server 2019 machine using Robocopy; no issues with the actual data migration, shares, etc.
So we are talking about a setup similar to this: https://docs.microsoft.com/en-us/windows-server/failover-clustering/deploy-two-node-clustered-file-server
After the cutover, I want to use the original AD computer object name "fileshare.contoso.local" on the new Server 2019 machine so there won't be IP/DNS issues for clients accessing the shares. (The current "fileshare.contoso.local" object has the automatic description "Failover cluster virtual name account" in AD.)
So my questions regarding the cutover/rollback are:
How can I disable or stop the clustered file share so that it won't try to regain the fileshare.contoso.local AD object? Is it enough to simply stop the role in Failover Cluster Manager?
If I need to delete the "fileshare.contoso.local" role for the cutover, would that also delete all the data on the clustered disk? And how would rollback work? Even if the data survives, would I need to re-create all the shares and their permissions?
I tried to google this quite a bit but couldn't find anything on the decommissioning part of clustered file shares.
Thanks!
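In case it helps anyone who lands here, the cluster-side steps I pieced together look like this (untested sketch; "fileshare" stands in for the actual role name, and my understanding is that removing the role deletes the cluster configuration, including the share definitions, but not the files on the clustered disk):

```powershell
# Stop the clustered file server role so it stops answering on the CNO/DNS name
Stop-ClusterGroup -Name "fileshare"

# Only once the cutover is confirmed: remove the role and its resources.
# The share definitions go with it, so a rollback means re-creating the
# shares (and checking that NTFS permissions survived on the disk itself).
Remove-ClusterGroup -Name "fileshare" -RemoveResources
```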
I like games that are more complex, but this game is filled to the brim with way too many things to learn.
You have to think about piercing, soft and hard attack, motorized infantry, 6+ different air attacks and a bunch more. That is in addition to the clusterf*ck of a naval tree that, as soon as you open it, is like a game within a game.
I mean, each upgrade section and unit has an entire page dedicated to it on Wikipedia, plus actual tutorial videos 30+ minutes long covering just one section of the game... for just a basic understanding of it.
Not only that, but areas with horrible supply, such as Africa, the Americas (especially South) and Asia (especially North), are a nightmare to play in due to modifiers.
You have to design templates for every situation, but you also have to upgrade and produce equipment to match them, and you have to have the experience points to create them in the first place.
I have played this game a long time, and it honestly feels horrible to play unless you are a major nation, because god forbid you want to play as Mexico against South America, or as an Asian nation, without having to deal with horrible penalties unless you build a specific template. It feels like there is no freedom.
Many of the trees feel overly complicated and could be cut down. Especially the naval tree.
I know it is a grand strategy game and is supposed to be complex, but it feels way over the top when all you want to do is have a map game.
Does anyone else agree, or does the community like it the way it is? Or perhaps do you think it is not complex enough?
I don't understand why. I know that the max you can build is 1 Dyson sphere, so just let us choose 1 of the stars and build it there; I don't see the problem. Or is it because the other stars are still close enough to collapse it?
Why are my OnePlus 9's system files taking up over 50% of my storage space? The phone is just two months old, and my images and videos add up to less than 3 GB.
This is just a rant for my own sanity. I'm more in need of emotional rather than tech support right now.
Why the hell is it so freaking difficult to transfer pictures and videos from your iPhone to your PC? It should be simple: you plug your phone in using the USB cable, your computer recognizes it as a hard drive, and you copy-paste the media files you want to save onto your computer in one click. Easy peasy, right?
Then why the f-ck do I need to use that awful Photos app on Windows, and why does an error pop up every single time? Why do I have to use iCloud, OneDrive or Dropbox to somehow make it easier? My computer and iPhone are 5 centimeters apart; my files shouldn't have to travel half the planet to get here. Aren't the guys at Apple aware that data centers and cloud storage gobble up so much electricity? Why are they doing this? Why? Why!? WHY!?
/rant
Currently we just use a series of folders and subfolders on our server, but it's becoming unwieldy. TIA.
Seriously. One scanner is 6 hours; all four scanners is 1 hour. So you deploy them all and just sit... for an entire 60 minutes, doing things other than playing the game. Or you dick around and poke through some caves out of curiosity for exotics. That's it.
So I sat there for an entire hour not playing the game. To me that's deeply flawed game design, requiring you to just sit there and NOT play it. I get that the game wanted me to repair the shelters the antennas sit in after severe weather, but in the end that just ended up not really mattering. Build the four shelters for the antennas, then wait an entire hour.
I think it's time for me to step away from this game for a few months until changes are made. I just landed on a prospect and... got right back in the pod and took off again, because what's the point: to unlock some more workshop items? Nah. I think I've seen enough and waited around / grinded enough. 130 hours in, I've got a feel for it all.
I hope the devs see this and watch their declining player counts carefully.
One of my memes is now accessible through nft.gamestop.com:
https://ipfs.nft.gamestop.com/ipfs/QmcrveZNykBBb6aDHZfmwFkYbmVmH1KDaTSDjFyxfmjK4D
here is the same link, but with the regular IPFS gateway:
https://ipfs.io/ipfs/QmcrveZNykBBb6aDHZfmwFkYbmVmH1KDaTSDjFyxfmjK4D
how does it work?
> The InterPlanetary File System (IPFS) is a protocol and peer-to-peer network for storing and sharing data in a distributed file system. IPFS uses content-addressing to uniquely identify each file in a global namespace connecting all computing devices.
https://en.wikipedia.org/wiki/InterPlanetary_File_System
there have been attempts to post such files with the nft.gamestop.com URL and pass them off as evidence of NFTs with a legitimate connection to GameStop. That's extremely sneaky: nft.gamestop.com appears in the URL, but in reality anyone can upload to IPFS and create those links.
one of these posts is here: >!/Superstonk/comments/rwibj9/loophead_listed_on_nftgamestopcom_bullish/hrbyinf/!< or as a screenshot https://i.imgur.com/PM5Wmme.png
that misleading link to the json file must've been created by the uploader of the file, because you need to know the hash in order to access it: /ipfs/comments/8rsm2a/how_can_i_browse_ipfs/
> How can I "browse" IPFS?
> You cannot. You need the file hash in order to find the file.
so... anyone can put files on IPFS; it's a p2p protocol, and ipfs.nft.gamestop.com is one of many gateways. Just a heads up to be on the lookout for this. I didn't know about IPFS until today, and these fake links can be very misleading, especially for NFTs with a fake connection to GameStop.
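To make the gateway point concrete: the CID (the Qm... hash) is the only part of the URL that identifies the file, and any public gateway will serve it. A small sketch:

```python
def gateway_url(gateway: str, cid: str) -> str:
    """An IPFS gateway URL is just the gateway host plus /ipfs/<cid>."""
    return f"https://{gateway}/ipfs/{cid}"

cid = "QmcrveZNykBBb6aDHZfmwFkYbmVmH1KDaTSDjFyxfmjK4D"
for gw in ("ipfs.nft.gamestop.com", "ipfs.io"):
    print(gateway_url(gw, cid))  # same content behind both URLs
```

So a gamestop.com hostname in the link proves nothing about who uploaded the content; it only tells you which gateway the link happens to route through.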