A list of puns related to "Block (data storage)"
I'm using messages.google.com on Firefox 88.01 and it keeps asking me for permission to "store data in persistent storage". (You can also see the pop-up at https://storage-quota.glitch.me/, by hitting "Request" in the persistent storage area.)
I can decline this request when it pops up and the website works fine, but the website will ask again on refresh.
I always want to decline this request, but in the pop-up there is no option to "always block", just "block", which is temporary and cleared on refresh. I've tried disabling persistent storage in about:config, but that completely breaks the website (i.e. blank page on load), and I've tried adding the website to "Manage exceptions" in preferences, but that also breaks the site.
Is there any way to disable the popup permanently without breaking the website? I don't want to be asked about persistent storage every time I access the website.
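For context, prompts like this come from the Storage API's persistence request; below is a minimal sketch of what such a page is presumably doing (illustrative TypeScript, not the site's actual code):

```typescript
// Sketch of a persistent-storage request, the kind of call that makes Firefox
// show the "store data in persistent storage" prompt. Declining just resolves
// the promise to false, so the site can ask again on the next load.
async function requestPersistentStorage(): Promise<void> {
  if (!navigator.storage || !navigator.storage.persist) {
    return; // Storage API not available in this browser
  }
  const alreadyPersisted = await navigator.storage.persisted();
  if (alreadyPersisted) {
    return; // permission already granted, no prompt needed
  }
  const granted = await navigator.storage.persist(); // triggers the permission prompt
  console.log(granted ? "Storage will not be evicted" : "Storage may be cleared under pressure");
}

requestPersistentStorage();
```

Because a decline only resolves the promise to false rather than recording a lasting decision, the page is free to call navigator.storage.persist() again on every load, which matches the behaviour you're seeing.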
Now tell me again how a small block size is not a cult.
According to https://file.app/, it costs $0.0006390 USD to store 1 GiB on the Filecoin network. Does it really only cost $0.06 to store 100 GiB for a year? This seems absurdly cheap. The website also compares this to the AWS S3 Infrequent Access Tier, and finds that Filecoin is 25000% cheaper. What??
A post on this subreddit from last week had comments listing https://filstats.com/ and https://file.app/ with highly conflicting storage cost information. Now, it seems that both sites redirect to https://file.app/. Which was correct, and why the significant discrepancy?
If this is accurate, why is the demand for storing with Filecoin not significantly higher? What am I missing?
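For what it's worth, the quoted figures are at least internally consistent; here's a quick back-of-the-envelope check (my own arithmetic, assuming the $0.0006390/GiB figure is per year, and taking S3 Standard-IA at roughly $0.0125 per GB-month, which is an assumption on my part):

```typescript
// Back-of-the-envelope check of the quoted Filecoin vs. S3 Infrequent Access costs.
const filecoinPerGiBYear = 0.0006390; // USD, per file.app (assumed to be per GiB per year)
const s3IaPerGBMonth = 0.0125;        // USD, rough S3 Standard-IA list price (assumption)

const cost100GiBYear = 100 * filecoinPerGiBYear;
console.log(cost100GiBYear.toFixed(4)); // ~0.0639, i.e. about $0.06 for 100 GiB for a year

const s3PerGBYear = s3IaPerGBMonth * 12;          // ~0.15 USD per GB per year
const ratio = s3PerGBYear / filecoinPerGiBYear;   // ~235x
console.log(`${Math.round(ratio * 100)}%`);       // ~23,500%, in the ballpark of the quoted 25,000%
```

None of this answers the demand question, but the "25000% cheaper" claim does roughly follow from those two prices.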
In the 21st century, digital storage media take up a significant amount of space. We know that by the 23rd century, instead of using two-dimensional storage discs stacked in racks that take up a lot of space, we are using isolinear technology.
Isolinear technology, from what we can tell, also requires a significant amount of physical space. We know from the TNG technical manuals that the computer is massive, and some portion of that is probably taken up by storage arrays of many isolinear storage chips. However, isolinear technology does store considerably more data.
By the 32nd century we are using reprogrammable matter in a variety of ways. Obviously there are physical improvements to ships and moving floors, but I posit there is a much greater purpose for reprogrammable matter. That purpose is that you can write data to each molecule of matter and then reorganize that matter to access it, significantly decreasing the physical storage needed for the massive amounts of data stored on starships.
I've been reading through research papers on blockchain that reference IOTA (via the IOTA Discord's Twitter/papers section) and stumbled upon the passage below. I'm looking to gain a better understanding of how IOTA manages and stores big data.
Handling Big Data on the Blockchain: In the Blockchain network, every participant maintains a local copy of the complete distributed ledger. Upon the confirmation of a new Block, the Block is broadcast throughout the entire peer-to-peer network, and every node appends the confirmed Block to their local ledger. While this decentralised storage structure improves efficiency, solves the bottleneck problem and removes the need for third-party trust [21], the management of IoT data on the Blockchain puts a burden on participants' storage space. The study in [22] calculated that a Blockchain node would need approximately 730 GB of data storage per year if 1000 participants exchange a single 2 MB image per day in a Blockchain application. Therefore, the challenge is to address the increasing data storage requirements when Blockchain deals with IoT data.
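The 730 GB figure quoted from [22] is straightforward to reproduce; here is the arithmetic spelled out (my own sanity check, not taken from the paper):

```typescript
// Reproducing the storage-growth estimate quoted from [22]:
// 1000 participants each broadcast one 2 MB image per day, and every node
// stores every confirmed block, so the ledger grows by this amount for everyone.
const participants = 1000;
const imageSizeMB = 2;   // MB per image
const daysPerYear = 365;

const growthMBPerYear = participants * imageSizeMB * daysPerYear;
console.log(`${growthMBPerYear / 1000} GB per year`); // 730 GB per year
```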
So, in addition to not being able to take missions with commodity rewards if we have a full (or no) cargo hold, apparently we cannot hand in missions with data/material rewards if our data/material storage is full.
Nice...
So, I know about dataslates, but how small can they be? Essentially, I had an idea for a Chapter whose marines carry around a pendant containing a data chip or something that they use to record their name and deeds, which can then be recovered after their death and used to memorialize them. Is this feasible?
I have no idea what is possible; I'm throwing this out there to serious engineers. Is it possible to distribute a node's data footprint across distributed storage, processing, and bandwidth? Can you effectively turn a node into a cloud virtual machine or Docker container that uses Web 3.0? The bottleneck for security and scalability in cryptocurrencies is the lack of nodes, often due to block size, namely Bitcoin Cash. This is arguably an issue with a node even on proof of stake. I'm talking about a highly accessible Web 3.0 interface that someone in Africa can use to run a node on a 3-year-old Samsung smartphone.
Inspired by the impending lift of the block ID limit, I wanted to think of ways that block data could be saved efficiently without wasting disk space. The method I thought of is a variation of the palette-mapping techniques used in everything from images to storing tile sets for video games.
Currently, all block IDs are 1 byte in size, and can be up to 1.5 bytes in size. This presents three problems:
I considered these three problems in my challenge, and thought of a method that solves all of them.
The concept is as follows:
First things first, the only part of the save format that needs to change is how subchunks are stored; in fact, the only part of the subchunk format that needs to change is the "Blocks" tag.
In each subchunk, a palette map must be saved. The palette map I thought of works similarly to the palette map used in saving Structure Block files: it stores all the unique blocks in the subchunk and assigns each unique block a palette ID.
As an example, consider a subchunk which has only 4 unique blocks: air, stone, dirt, and grass. Each block can be assigned an ID using the numbers 0 through 3, like so:
| Block | Palette ID |
|---|---|
| Air | 0 (00) |
| Stone | 1 (01) |
| Dirt | 2 (10) |
| Grass | 3 (11) |
In the example palette map, I included the binary equivalent of each ID in parentheses. Notice how each ID fits in at most 2 bits.
When saving this subchunk, rather than saving each palette ID as a full byte (8 bits), I instead save them all as 2-bit numbers. Doing so means I can fit 4 IDs into a single byte without waste.
Here is an example of what the stored data might look like. The top row of numbers shows the index of each bit, spaces separate each palette ID, and vertical bars separate each byte:
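As a rough sketch of that packing step (hypothetical TypeScript; packPaletteIds and the example block sequence are mine, not part of any actual save format), assuming the 4-entry palette above so each ID needs only 2 bits:

```typescript
// Pack an array of palette IDs (each 0-3, i.e. 2 bits wide) into bytes,
// four IDs per byte, with no wasted bits. IDs are placed starting at the
// least-significant bits of each byte.
function packPaletteIds(ids: number[], bitsPerId: number): Uint8Array {
  const idsPerByte = Math.floor(8 / bitsPerId);
  const packed = new Uint8Array(Math.ceil(ids.length / idsPerByte));
  ids.forEach((id, i) => {
    const byteIndex = Math.floor(i / idsPerByte);
    const shift = (i % idsPerByte) * bitsPerId;
    packed[byteIndex] |= (id & ((1 << bitsPerId) - 1)) << shift;
  });
  return packed;
}

// Palette: 0 = air, 1 = stone, 2 = dirt, 3 = grass (as in the table above).
// Eight blocks fit in two bytes instead of eight.
const blocks = [0, 1, 2, 3, 3, 2, 1, 0];
console.log(packPaletteIds(blocks, 2)); // Uint8Array [ 228, 27 ] = 0b11100100, 0b00011011
```

With a 16x16x16 subchunk (4096 blocks) and a 4-entry palette, that's 1024 bytes instead of 4096.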
And yes, I have set min and max block to equal 2 hr.
If Elon came in and purchased somewhere around 30 percent of Doge, it would be feasible for him to effectively "guarantee" transaction outcomes. Basically, Doge can implement a trust-based system that would allow transactions to be instantly verified under the pretense that the 30 percent holder is guaranteeing the funds. This essentially allows for a delay in the transfer between a person's wallet and a store while the transaction is performed via the blockchain; after the normal 60-second transaction has timed out, you would simply be performing a normal transaction.

This is of course only as long as the number of open transactions doesn't exceed the whale's holdings, and it is obviously a trust-based system, but you can gain trust points based on your transaction history, which will increase your limits and gradually allow you to make faster and faster transactions as the whale becomes more and more willing to front them. This also means that the data being held by the chain doesn't have to be so centralized to perform faster transactions; you would just need to keep track of current transactions and only one account balance for a short period of time.

This would also allow people who are willing to hold to become effectively "short-term lenders" (around 50 seconds), which will further increase incentives to hold and also trust in the currency. Might be worth a thought or two! -Cory
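A toy sketch of the bookkeeping this idea would need (entirely hypothetical TypeScript; the holdings cap and the ~50-second settlement window are just the post's numbers, nothing Dogecoin actually implements):

```typescript
// Toy model of the proposed "whale-guaranteed" instant transactions:
// the guarantor fronts each pending transfer, as long as the total amount
// currently fronted stays below their own holdings.
interface PendingTx {
  amount: number;
  startedAt: number; // epoch ms
}

class Guarantor {
  private pending: PendingTx[] = [];
  constructor(private holdings: number, private windowMs = 50_000) {}

  // Accept a transaction for instant "verification" only if fronting it
  // would not push total open guarantees past the guarantor's holdings.
  tryGuarantee(amount: number, now = Date.now()): boolean {
    this.settleExpired(now);
    const open = this.pending.reduce((sum, tx) => sum + tx.amount, 0);
    if (open + amount > this.holdings) {
      return false; // fall back to a normal on-chain confirmation
    }
    this.pending.push({ amount, startedAt: now });
    return true;
  }

  // After ~50 seconds the real transaction should have confirmed on-chain,
  // so the guarantee can be released.
  private settleExpired(now: number): void {
    this.pending = this.pending.filter(tx => now - tx.startedAt < this.windowMs);
  }
}

const whale = new Guarantor(1_000_000); // holdings in DOGE (made-up figure)
console.log(whale.tryGuarantee(250_000)); // true: fronted instantly
console.log(whale.tryGuarantee(900_000)); // false: would exceed the open-guarantee cap
```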
I want to make an application that stores files on a decentralized network and lets me prove when a file was put on the network. I was looking at IPFS and Sia, but neither seems to have a way to prove when files were put onto the network. What am I looking for? Or can I do this with IPFS or Sia?
We are in Germany, which is subject to GDPR. We have a contract with a third party who stores our data. It turns out the servers our data is stored on are in the US. I need to identify what the law actually states in relation to this; any ideas?
I know we could ask for it to be moved to EU-based servers, but I need to identify what the law is.