A list of puns related to "Merkle–Damgård construction"
So I know that the hash splits its message (M) into equal parts for the compression function, so what is an easy way to split random bits evenly into equal-sized chunks? (In Python 3)
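For the Python 3 part, here is a minimal sketch of fixed-size chunking. The function name `chunk_bits` and the zero-padding of the last block are illustrative only; real Merkle–Damgård hashes append a specific length-encoding pad (MD-strengthening) rather than plain zeros:

```python
def chunk_bits(data: bytes, block_size: int) -> list:
    """Split a byte string into fixed-size blocks, zero-padding the last one.
    (Illustrative only: real hashes use length padding, not plain zeros.)"""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    if blocks and len(blocks[-1]) < block_size:
        blocks[-1] = blocks[-1].ljust(block_size, b"\x00")
    return blocks

chunk_bits(b"abcdefgh", 3)  # → [b'abc', b'def', b'gh\x00']
```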
I have read a lot of high-level material about the sponge (absorb/squeeze) construction, but I don't understand how it differs from SHA-3's predecessors; after all, they all sort of soak up data and then squeeze out a digest of it. How is SHA-3 / Keccak truly different?
Also, is the math of SHA-3 something that would have been difficult 20 years ago? I just want to know, in layman's terms (or at least undergrad terms), how SHA-3 is innovative other than the fact that it names its process of digesting data a "sponge"?
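To make the absorb/squeeze structure concrete, here is a toy, deliberately non-cryptographic sponge sketch (the permutation is a made-up mixing step, and the rate/capacity sizes are arbitrary). The structural difference from Merkle–Damgård: the output is not the internal chaining value itself, only the "rate" portion of a larger state is ever read out, and you can squeeze as many output bytes as you like:

```python
def toy_sponge(message: bytes, rate: int = 8, digest_len: int = 16) -> bytes:
    """Toy sponge: absorb rate-sized blocks into a wider state, then squeeze.
    NOT secure -- the permutation below is a made-up mixing step."""
    state = bytearray(rate + 8)  # rate bytes + 8 "capacity" bytes never output

    def permute(s: bytearray) -> bytearray:
        # Hypothetical stand-in for the Keccak-f permutation.
        for i in range(len(s)):
            s[i] = (s[i] * 31 + s[(i + 1) % len(s)] + i) % 256
        return s

    # Simple 0x80 00..00 padding sketch, to a multiple of the rate.
    padded = message + b"\x80" + b"\x00" * ((-len(message) - 1) % rate)

    # Absorb: XOR each block into the rate portion, then permute the whole state.
    for off in range(0, len(padded), rate):
        for i, b in enumerate(padded[off:off + rate]):
            state[i] ^= b
        state = permute(state)

    # Squeeze: read rate bytes at a time, permuting between reads.
    out = bytearray()
    while len(out) < digest_len:
        out.extend(state[:rate])
        state = permute(state)
    return bytes(out[:digest_len])
```

Note how the capacity bytes are never emitted; that hidden portion is what gives the sponge its security argument, and it is why length-extension attacks on Merkle–Damgård hashes don't carry over.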
From what little I understand, the Merkle path is based on the "opposite" (sibling) hashes in each pair in the tree, which prove the inclusion of the "actual" hashes that descend down to a given transaction.
What I don't understand is how we (an SPV node?) get access to these "opposing" hashes in the tree, working our way down through so many secure layers, if all we have access to is the root hash?
Edit: now I understand we aren't going from root to leaf, but leaf to root. It sounds like a full node supplies the SPV node the "opposite" non-transaction hashes referenced above, but I find it difficult to understand how the Bloom filter fits into this.
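The leaf-to-root walk described in the edit can be sketched like this. The `(sibling_hash, sibling_is_right)` proof encoding is a made-up simplification; a real Bitcoin `merkleblock` message encodes the path with a flag bitfield and has byte-order details omitted here:

```python
import hashlib

def dsha(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def root_from_proof(leaf_hash: bytes, proof) -> bytes:
    """Walk leaf-to-root: at each level, combine the running hash with the
    sibling ("opposite") hash supplied by the full node. `proof` is a list
    of (sibling_hash, sibling_is_right) pairs -- a simplified encoding."""
    h = leaf_hash
    for sibling, sibling_is_right in proof:
        h = dsha(h + sibling) if sibling_is_right else dsha(sibling + h)
    return h
```

The SPV node then just compares the result against the root in the block header it already trusts; the Bloom filter's only role is telling the full node *which* transactions to build such proofs for.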
I'm working on an NFT project and it would be cool if I could create a Merkle-tree-based whitelist.
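A minimal sketch of computing a whitelist root in Python. The duplicate-last-node convention and the placeholder addresses are assumptions; Solidity-side verifiers such as OpenZeppelin's MerkleProof library typically hash sorted pairs instead, so the off-chain builder must match whatever convention the contract uses:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Compute a Merkle root over a list of leaf hashes.
    Odd-sized levels duplicate the last node (one common convention)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical whitelist entries -- replace with real addresses.
addresses = ["0xAlice", "0xBob", "0xCarol"]
root = merkle_root([h(a.encode()) for a in addresses])
```

You publish only `root` on-chain; each minter submits their address plus the sibling hashes on their path, which the contract recomputes up to the root.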
I read a document which states "Matrix messages are stored in per-conversation Merkle DAG data structures and conversations are replicated across all participating servers", but I can't find this on the Matrix site. Is there any link related to this?
If I were to download Ethereum's state tree, how big would it be? Also, is this tree constantly growing?
Hey r/ipfs,
I am trying to recover data from an IPFS instance after an incident we don't fully understand. The data is not super critical, but we would like to be able to get it back. The last operation was to increase the GC Watermark from 100GB to 150GB, and when we restarted the server, it didn't come back online.
Currently, when I boot up the IPFS daemon, these are the only logs that are printed:

    Initializing daemon...
    go-ipfs version: 0.10.0
    Repo version: 11
    System version: amd64/linux
    Golang version: go1.16.8

I cannot connect to the API or any other endpoint. We had roughly 100GB of data, and the server has been stuck in this state for more than 12 hours at this point.
If I try to run a command when the server is running, it shows me `Error: lock /root/.ipfs/repo.lock: someone else has the lock`, and I remember seeing "Merkle dag not found" error when the daemon is not running.
Should we keep waiting for a recovery? We have the data duplicated in a few locations right now, but I don't have any clue how to recover from such an error.