We were promised Strong AI, but instead we got metadata analysis calpaterson.com/metadata.…
πŸ‘︎ 2k
πŸ’¬︎
πŸ‘€︎ u/calp
πŸ“…︎ Apr 30 2021
🚨︎ report
Ryan Cohen's tweet an hour ago was fairly inconspicuous, so I decided to put the raw image into a webpage to examine its metadata and see if there were any inconsistencies. You won't believe what I found. πŸ΅πŸŒπŸš€πŸ’ŽπŸ™Œ IM FUCKING JACKED v.redd.it/u6c0kut42ty61
πŸ‘︎ 1k
πŸ’¬︎
πŸ‘€︎ u/EndlessCookies
πŸ“…︎ May 13 2021
🚨︎ report
Can someone tell me what happened in this transaction? The metadata looks very suspicious. explorer.cardano.org/en/t…
πŸ‘︎ 291
πŸ’¬︎
πŸ‘€︎ u/wutzebaer
πŸ“…︎ Apr 16 2021
🚨︎ report
ytmdl Web - A webapp that lets you download music by getting the audio from YouTube and metadata from various sources like iTunes, Last.fm, Gaana and others. v2 released with lots of fixes. v.redd.it/fcblbkouyrj61
πŸ‘︎ 4k
πŸ’¬︎
πŸ‘€︎ u/Droider412
πŸ“…︎ Feb 26 2021
🚨︎ report
LLVM: Fix noalias metadata handling for instructions simplified during cloning reviews.llvm.org/D102110
πŸ‘︎ 247
πŸ’¬︎
πŸ‘€︎ u/as-com
πŸ“…︎ May 08 2021
🚨︎ report
Accelerometer in camera metadata

Given that most modern cameras have an accelerometer (it drives the auto horizon leveling you see in the camera), why is that information not provided in the RAW metadata? Image-editing programs like Capture One and Lightroom could then offer to auto-level your images from that data, instead of using their less-than-reliable algorithms, which tend to fail when there are too many lines in the shot.

Given that GPS data can be recorded, I don't see why they don't record accelerometer data.

πŸ‘︎ 263
πŸ’¬︎
πŸ‘€︎ u/JustACanadianBoi
πŸ“…︎ Apr 18 2021
🚨︎ report
Found a small but sweet easter egg when browsing video metadata, GB4ever <3
πŸ‘︎ 74
πŸ’¬︎
πŸ‘€︎ u/Foritus
πŸ“…︎ May 10 2021
🚨︎ report
Wish that we could edit metadata for tracks saved to library on Spotify like Apple Music
πŸ‘︎ 164
πŸ’¬︎
πŸ‘€︎ u/the_k_nine_2
πŸ“…︎ Apr 30 2021
🚨︎ report
Export a spreadsheet with metadata from Plex Library. Why is this hard?

I have been looking to do this off and on for years now. There used to be plugins, from what I read. I see a lot of people asking this question and no one giving a usable modern answer. The information is in my server. I just want my server to puke out a spreadsheet with basic information from my libraries: matched name, year released, file name, etc.
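
For what it's worth, a minimal sketch of one way to do this with the python-plexapi package (the server URL and token are placeholders, and the section name assumes a library called "Movies"):

import csv
from plexapi.server import PlexServer  # pip install plexapi

plex = PlexServer("http://localhost:32400", "YOUR_PLEX_TOKEN")  # placeholder server/token

with open("plex_library.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["matched name", "year released", "file name"])
    for item in plex.library.section("Movies").all():
        # item.locations lists the file path(s) on disk for this item
        writer.writerow([item.title, item.year, "; ".join(item.locations)])

Swap the section name for each library you want to dump.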

Anyone? Thanks in advance

πŸ‘︎ 105
πŸ’¬︎
πŸ‘€︎ u/cainram
πŸ“…︎ Apr 28 2021
🚨︎ report
New addition to my file server. QNAP quad NVMe to PCIe adapter. Two Intel Optane M10 drives (mirrored) for ZIL and two Intel 760p drives (mirrored) for ZFS metadata and small-block storage.
πŸ‘︎ 214
πŸ’¬︎
πŸ‘€︎ u/jllauser
πŸ“…︎ Apr 02 2021
🚨︎ report
NFT Metadata Standard

Hey, I proposed some time ago an NFT Metadata Standard on the Cardano Forum: https://forum.cardano.org/t/cip-nft-metadata-standard/45687

NFTs are slowly coming to Cardano, but there are projects that do not follow this idea or don't get what it is about. Maybe I can get a bigger reach by posting it on reddit.

First of all, metadata attached to a transaction needs a top-level key, also called a label. Looking at this CIP https://github.com/cardano-foundation/CIPs/blob/master/CIP-0010/CIP-0010.md, we see that there are reserved labels (0-15 and 65536-131071). In particular, you should avoid using labels 0 and 1 for your own metadata standard.
Unfortunately, some of these NFT projects are using label 1 for their NFT metadata.
I'm proposing to use the 721 label. It's free to use and is already implemented in some of the NFT projects.

Secondly, Cardano allows minting/sending multiple tokens in a single transaction. To adapt the metadata and make use of this feature, I propose the following structure:

{
  "721": {
    "cbc34df5cb851e6fe5035a438d534ffffc87af012f3ff2d4db94288b": {
      "nft0": {
        "name": "NFT 0",
        "image": "ipfs://ipfs/&lt;IPFS_HASH&gt;",
        &lt;other properties&gt;
      },
      "nft1": {
        "name": "NFT 1",
        "image": "ipfs://ipfs/&lt;IPFS_HASH&gt;",
        &lt;other properties&gt;
      }
      ...
    }
  }
}

This model allows minting either one token or multiple tokens, even under different policies, in a single transaction. A third-party tool can then fetch the token metadata seamlessly. It doesn't matter whether the metadata includes just one token or several; the procedure for the third party is always the same:

  1. Look up the 721 key
  2. Look up the Policy Id of the token
  3. Look up the Asset name of the token
  4. You end up with the correct metadata for the token
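
As a minimal sketch in Python, that procedure is just nested dictionary lookups on the example metadata above:

def lookup_nft_metadata(tx_metadata, policy_id, asset_name):
    label = tx_metadata["721"]     # 1. look up the 721 key
    policy = label[policy_id]      # 2. look up the Policy Id of the token
    return policy[asset_name]      # 3./4. look up the Asset name -> token metadata

metadata = {"721": {"cbc34df5cb851e6fe5035a438d534ffffc87af012f3ff2d4db94288b": {
    "nft0": {"name": "NFT 0", "image": "ipfs://ipfs/<IPFS_HASH>"}}}}
print(lookup_nft_metadata(metadata, "cbc34df5cb851e6fe5035a438d534ffffc87af012f3ff2d4db94288b", "nft0"))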

Example:

We take the metadata from above and want to look up the metadata for the token cbc34df5cb851e6fe5035a438d534ffffc87af012f3ff2d4db94288b.nft0

  1. Look up the 721 key:


{"cbc34df5cb851e6fe5035a438d534ffffc87af012f3ff2d4db94288b": {
      "nft0": {
        "name": "NFT 0",
        "image": "ipfs://ipfs/&lt;IPFS_HASH&gt;",
        &lt;other properties&gt;
      },
      "
... keep reading on reddit ➑

πŸ‘︎ 100
πŸ’¬︎
πŸ‘€︎ u/alessandro_konrad
πŸ“…︎ Apr 05 2021
🚨︎ report
[OC] Machining Mount Hood Airborne LiDAR + embedded NFC metadata/interactivity v.redd.it/mudnr1jtmcv61
πŸ‘︎ 344
πŸ’¬︎
πŸ‘€︎ u/domriccobene
πŸ“…︎ Apr 25 2021
🚨︎ report
Don't Understand Metadata Hash? Learn Here!

https://www.youtube.com/watch?v=S5wI1s4Kaf4

Please watch this video if you're confused about the metadata hash and its uses.
I know the video is about SHA-256, which does not fit in the coded parameters, but you can use MD5 instead, whose 32-character hex digest does fit.

Please try to use these tools to their full extent. I screenshot my metadata hashes and save them with my original artworks. If authenticity ever came into question, I'm the only person who could tell you what input produced the metadata hash, since a hash can't be reversed. I can therefore verify my file is not only timestamped on the chain first, but also carries a uniquely identifiable hash.

Link to SHA-256 generator

https://passwordsgenerator.net/sha256-hash-generator/
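
If you'd rather not paste artwork into a website, the same hashes can be computed locally; a minimal sketch in Python (hashlib is in the standard library; the file name is just an example):

import hashlib

with open("artwork.png", "rb") as f:   # example file name
    data = f.read()

print("SHA-256:", hashlib.sha256(data).hexdigest())  # 64 hex characters
print("MD5:", hashlib.md5(data).hexdigest())         # 32 hex characters
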
If anyone has further questions plz feel free to comment, I'm sure other people have similar questions. Thanks all <3

πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/StonedFund
πŸ“…︎ May 08 2021
🚨︎ report
Metadata Matrix
πŸ‘︎ 54
πŸ’¬︎
πŸ‘€︎ u/bascurtiz
πŸ“…︎ May 03 2021
🚨︎ report
Caught student lying from... the image metadata(!) of the "evidence."

My class had a paper due Friday. It's a pretty brief 10-page, double-spaced analysis essay. Read the source material, give me your analysis type deal. The assignment was posted on the course site around mid-March, so the students had plenty of time and received several reminders. I discussed the paper in class many times and set up several office hours meetings - only 2 students (out of 30) showed up.

Thursday night, around 11:45pm, I received an email from a student. They claimed that they dropped their computer and cracked the screen. The computer doesn't work now, and therefore, they won't be able to complete the assignment on time. They also provided multiple images and videos as "evidence." One of the images was an Apple service receipt itemizing the damage, diagnosis, repairs, cost, etc. This whole production seemed a bit suspicious to me. What set my bullshit alarm off was a particular piece of information on the receipt: it said the laptop was purchased in Dec. 2017 BUT it was still in warranty. Hmm... As a Mac user myself, I know that Macs have a one-year warranty from the date of purchase. So, I downloaded the image to my computer, right click, get info, and voila. The image was taken in Oct. 2018. Gotcha!

I confronted the student with this information and reminded them of our university's student conduct policy, and they immediately folded. The end-of-semester stress got to them, they panicked and didn't know what to do, bla bla bla. I decided not to file a report with the university, but the student also didn't get the extension they requested. The essay they submitted is a hot pile of garbage and reads like it was written in 15 minutes. So, I decided a poor grade on this assignment (40% of the final grade) is enough punishment.

UPDATE: Thank you all for your advice. I contacted the student's home department to let them know about the student's behavior in case they have pulled the same deception tactics before or continue to do so in the future.

πŸ‘︎ 89
πŸ’¬︎
πŸ“…︎ Apr 18 2021
🚨︎ report
The final word about btrfs balance metadata

Every thread regarding btrfs maintenance points to this script:

https://github.com/kdave/btrfsmaintenance

More specifically:

https://github.com/kdave/btrfsmaintenance/blob/master/btrfs-balance.sh

It seems some people blindly trust it because it is from one of the btrfs developers/maintainers. But when it comes to balance, I notice it will balance metadata weekly, while according to the btrfs mailing list you should not balance metadata at all, at least not as a regular maintenance task. It is only useful for certain specific use cases, mostly involving a filesystem spanning multiple disks, migration, etc.

And you should definitely not use musage=0! But that is what would happen weekly if you run the maintenance script with defaults.

For reference, follow this thread and its links from this point: https://www.reddit.com/r/btrfs/comments/cnjdxb/advice_on_running_a_balance/ewbwank?utm_source=share&utm_medium=web2x&context=3

So now I am setting up my home server with 2 SSDs and 2 HDDs. One of the HDDs is for backups: it is mounted/unmounted once per night and btrbk does its thing.

Looking at the metadata of one of my SSDs:

sudo btrfs filesystem df /mnt/disks/data0

Data, single: total=599.01GiB, used=596.46GiB

System, single: total=32.00MiB, used=96.00KiB

Metadata, single: total=2.00GiB, used=783.30MiB

GlobalReserve, single: total=512.00MiB, used=0.00B

When I ran as a test -musage=20: Done, had to relocate 2 out of 603 chunks

Metadata, single: total=1.00GiB, used=783.33MiB

So the total metadata allocation shrank by half. But was that necessary?

After reading lots of topics, this is my conclusion: run balance MONTHLY with DUSAGE=20 and NO MUSAGE, in 2 runs in case there is not enough space for dusage at 20%:

run_task btrfs balance start -v -dusage=10 /dev/DISK1

run_task btrfs balance start -v -dusage=20 /dev/DISK1

A monthly SCRUB is performed on each disk first, before this runs.
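
A sketch of that monthly routine in Python, using the commands above (run as root; note that btrfs balance operates on a mounted path like /mnt/disks/data0, not a /dev node):

import subprocess

MOUNTPOINTS = ["/mnt/disks/data0"]  # one entry per filesystem

for mnt in MOUNTPOINTS:
    # scrub first (-B waits for completion), then balance data chunks in
    # two passes; metadata (musage) is deliberately left alone
    subprocess.run(["btrfs", "scrub", "start", "-B", mnt], check=True)
    subprocess.run(["btrfs", "balance", "start", "-v", "-dusage=10", mnt], check=True)
    subprocess.run(["btrfs", "balance", "start", "-v", "-dusage=20", mnt], check=True)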

Does this make sense?

πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/zilexa
πŸ“…︎ Apr 30 2021
🚨︎ report
Plex Music Metadata not pulling up?

Hey everyone, I've been pulling my hair out over this issue with my music library for a couple of weeks now. For context, I had been running a Plex server on my Ubuntu machine for years until the PC died. I created a new server on a Windows machine, and all my movies and TV shows show up exactly as they were on the previous server. However, I noticed that my music was missing genre, style, mood, etc. metadata. I made sure the scan agents all pointed to the updated Plex scanner and have tried several metadata refreshes to no avail.

My music is organized in the structure Plex recommends, and I have even rescanned my library using MusicBrainz Picard to see if that would help. The only way I can get genre metadata to show up in Plex is to change the library setting to use local metadata; however, that doesn't solve the missing mood, artist bio, artist/song recommendations, and other features I grew fond of using.

Has anyone else experienced this before and how do I fix it?

πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/tornshorts
πŸ“…︎ May 13 2021
🚨︎ report
Pix gets a metadata and EXIF information dialog. Better editing view and bug fixes. More on the Maui weekly report.
πŸ‘︎ 159
πŸ’¬︎
πŸ‘€︎ u/milohr
πŸ“…︎ Apr 23 2021
🚨︎ report
Metadata and subject distance in night photos

So I was looking through some of the metadata that's in the photos and I noticed something kind of interesting. It's not in all the photos, but some of them show the subject distance in the metadata.

For example, it says that the Kris hair photo was taken from a distance of 140mm (6 inches for you American people), and that most of the large bright objects were 960mm (3 feet) away from the camera.

I don't know how accurate this camera is for detecting distance but I'd guess that 6 inches would be about right for the hair photo with a wide angle (4mm), which would suggest that it would surely know if these bright spots were a finger on the lens, or something further away.

https://preview.redd.it/4l1nvdudc6y61.png?width=2450&format=png&auto=webp&s=be0545337e2bc2cfcce7ecc95ebd54b4934de2c7

But unfortunately the metadata suggests that pretty much all the other objects are also at either 940mm or 960mm from the camera, which casts a bit of doubt on the accuracy. I don't know how this model of camera detects distance, whether it uses infrared or just estimates it; it might just have a few "set distances" to aid with focus, or the power of the flash.

I just thought I'd mention this since I haven't heard anybody else talking about it. I haven't looked at all the metadata for all the photos; it seems to be randomly missing from some of them. But perhaps someone else will find something interesting in the other photos.

πŸ‘︎ 16
πŸ’¬︎
πŸ‘€︎ u/gijoe50000
πŸ“…︎ May 09 2021
🚨︎ report
I mastered the live performance of OK Human, separate audio files with metadata in the comments youtu.be/qzG5DL2V9ZM
πŸ‘︎ 55
πŸ’¬︎
πŸ‘€︎ u/NarrowElf
πŸ“…︎ Apr 24 2021
🚨︎ report
Cast Metadata repeating itself after only a few names reddit.com/gallery/mexcln
πŸ‘︎ 177
πŸ’¬︎
πŸ‘€︎ u/thebyronq
πŸ“…︎ Mar 28 2021
🚨︎ report
We were promised Strong AI, but instead we got metadata analysis calpaterson.com/metadata.…
πŸ‘︎ 101
πŸ’¬︎
πŸ‘€︎ u/calp
πŸ“…︎ Apr 27 2021
🚨︎ report
Which anime metadata DB is everyone using?

The title question.

Out of the default DBs and what's available in the default plugin repositories, which DB is everyone using for their anime collection? I'm using TMDB and it's okay maybe 90% of the time, and I manually add the missing info for the remaining 10%; I'm wondering if that's the best option or if someone else is closer to 100%.

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/sCeege
πŸ“…︎ May 02 2021
🚨︎ report
A project to save nearly every Touhou music uploader's video metadata

As always, link before description.

GitHub Repo: https://github.com/e666666/TouhouSongDatabase

Link to download the whole program: https://github.com/e666666/TouhouSongDatabase/archive/refs/heads/main.zip

Link to download just the database: https://github.com/e666666/TouhouSongDatabase/raw/main/videos.json

First of all, no, I'm not saving the videos themselves, just the metadata. What's metadata, you ask? It's just a fancy way of saying the info in the video description, such as who sang it, the arrangement, or the original Touhou song name.

And why would you save it? Two reasons.

  1. Sometimes I would like to find similar songs, and with all the individual channels' videos in one place, I can simply say "Give me Ayo songs!" and I get a list with 33 of them (although over half of them will be unavailable thanks to Alice's termination, which brings us to reason two)
  2. We all know how Alice got terminated recently, and with that, 140 out of 597 of my songs were deleted. This would have been catastrophic if I hadn't started this project earlier; I began it as a school project aimed at just Alice's channel, which is now gone. But with the database in place, I can use it to figure out a song's name from its video id. I can also find other important info, such as who made the illustration for that video, which would be hard to find by simply googling the video id.

Enough gibberish, so how do you use it? To use the actual program, you will need Python 3 installed. After that, you have two ways of downloading the program: either with the main.zip link above, or using Git to clone it. While using Git means another program to install, it simplifies updating later on, since you can just "git pull" instead of redownloading everything.

Now, double-click Start.bat or Start.sh depending on your system, and hopefully it will just work. There are only two options you are likely to ever use: the first and the third.

The first allows you to search a specific property in the database, which is again a fancy way of saying you can find which songs have something in common. In the second scenario above, I use it to find videos with the same song title on other channels. While searching on YouTube might be faster, I find this method neater.
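
Under the hood, that kind of property search is just a filter over videos.json; a rough sketch in Python (the "vocals" field name here is hypothetical, the real database may use different keys):

import json

with open("videos.json", encoding="utf-8") as f:
    videos = json.load(f)

# "vocals" is a hypothetical field name for who sang the song
matches = [v for v in videos if "Ayo" in str(v.get("vocals", ""))]
print(len(matches), "songs found")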

One thing is that you might be overwhelmed by all the

... keep reading on reddit ➑

πŸ‘︎ 79
πŸ’¬︎
πŸ‘€︎ u/e666666
πŸ“…︎ Apr 11 2021
🚨︎ report
[OC] Animated Heatmap of Parler Video GPS Metadata in DC on January 6th 2021 v.redd.it/qkkqj6nl0ya61
πŸ‘︎ 7k
πŸ’¬︎
πŸ‘€︎ u/_Xeet_
πŸ“…︎ Jan 12 2021
🚨︎ report
Where in the World is Q? Clues from Image Metadata bellingcat.com/news/rest-…
πŸ‘︎ 44
πŸ’¬︎
πŸ‘€︎ u/thegreatblazed
πŸ“…︎ May 10 2021
🚨︎ report
Feature request: add a metadata remover to ProtonDrive client apps

I've been excited by the progress ProtonDrive has made so far, and I'm liking the security model. I have a suggestion though: can you add an option to remove the metadata of files in the client application itself, before they get encrypted on the client and uploaded to the ProtonDrive servers? Something like exiftool?

Edit: https://protonmail.uservoice.com/forums/284483-protonmail/suggestions/43341975-add-a-metadata-remover-to-protondrive-client-apps

πŸ‘︎ 109
πŸ’¬︎
πŸ‘€︎ u/hushrom
πŸ“…︎ May 03 2021
🚨︎ report
Switched to Linuxserver/jellyfin docker image and almost no metadata is being downloaded.

It's been 2 days. This same library worked without issue with the official jellyfin docker image but based on comments I read here I decided to switch.

I started looking over the logs but I'm not seeing anything that jumps out. I'm going to blow away the library and start over but I was wondering if anyone running the latest linuxserver/jellyfin docker image has run into this issue and what steps might be good to take to troubleshoot.

This same library had no significant issues with metadata previously so I don't believe it's a file/directory naming convention issue.

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/Smile_lifeisgood
πŸ“…︎ May 07 2021
🚨︎ report
Metadata management with Amundsen: any easy-to-understand tutorials available?

I'm trying to use Amundsen for metadata management in our shop. I managed to install it in an Ubuntu VM, but I am stuck on making it work with my databases. Are there any easy-to-understand tutorials available for this tool?

It seems to me that maintaining Amundsen alone requires dedicated staff, so would it be possible to suggest some other open-source alternatives that have similar functionality?

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/AMGraduate564
πŸ“…︎ May 02 2021
🚨︎ report
What's the naming convention for media to get metadata and descriptions? (this is a straight MakeMKV rip from the DVD ISO)
πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/AweSmAsh
πŸ“…︎ May 13 2021
🚨︎ report
Developing an r/RomanceBooks Fantasy romance/PNR recommendation Database to accompany the Sci-Fi Romance Database. What metadata would you want? (Note: Sci-fi database is live! See post for more details!)

The BETA for the Fantasy and Paranormal Romance Database is now LIVE! Info at this post! Thank you everyone for your amazing suggestions on this post! It made making the BETA really easy!

I manage the r/RomanceBooks sci-fi recommendation database, and I’m in the process of developing an additional database for fantasy and PNR recommendations! Info on the sci-fi database at this post. The idea is that you can filter through a variety of metadata to sort through books recommended by r/RomanceBooks readers. I think the fantasy database will need to be separate because each genre has some unique things that don’t apply to the other genre.

I’d love to hear your thoughts on what metadata you’d like to be filterable on within a fantasy romance/PNR database. Both the sci-fi database and the upcoming fantasy databases are meant to be community resources, so I’m open to any help or feedback you’d like to give!

Without further ado, here are my initial thoughts on metadata for a fantasy romance/PNR database. (Edit: Now with additions based on comments! Thank you!!! These suggestions in the comments are amazing!!)

  • Search All Keywords and Blurb Simultaneously (Single word / phrase)
  • GoodReads Link (or other URL if not on goodreads)
  • Series Name
  • First Book Name
  • Author
  • Official Description (for book, not series)
  • Trope (Arranged Marriage, stuck together, unlikely allies, allies to lovers, friends to lovers, enemies to lovers, fill in blank)
  • Type of Encounter (political - royalty, political - inter species politics, school/educational institution, kidnapping, call of destiny, Hero rescuer, heroine rescuer, couple rescues each other, fill in blank)
  • Hero Type (alpha, atypical alpha, antihero, grumpy, cinnamon roll, normal dude, warrior, fill-in blank)
  • Fated Mates? (yes/no)
  • young adult, new adult, or adult?
  • Other Tropes (List as many as you like!)
  • Shifters (werewolves etc) (yes/no)
  • Vampires (yes/no)
  • Dragons (yes/no)
  • Fae (yes/no)
  • Angels/demons (angels only, demons only, both, neither, other notes: fill in blank)
  • Wizards/Witches (yes/no)
  • Other types of magical beings (Gods, gargoyles, mermaids/sirens, necromancers/mediums, reapers, fill in blank)
  • Fairy Tale (no, present in universe - like shrek, if retelling of fairy tale, typ
... keep reading on reddit ➑

πŸ‘︎ 77
πŸ’¬︎
πŸ‘€︎ u/whtnymllr
πŸ“…︎ Mar 31 2021
🚨︎ report
LPT for stackers: remove the Metadata on your pictures before posting.

A friend of a friend was just relieved of 2 monster boxes' worth.

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/kitten0077
πŸ“…︎ Apr 20 2021
🚨︎ report
Metadata for Repo failed

I'm trying to update and I'm getting this error:

Errors during downloading metadata for repository 'updates':

- Status code: 404 for http://fedora.zero.com.ar/linux/updates/34/Everything/x86_64/repodata/repomd.xml (IP: 190.111.255.148)

- Downloading successful, but checksum doesn't match. Calculated: 19681c.................

Error: Failed to download metadata for repo 'updates': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried

πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/ziveRUN
πŸ“…︎ Apr 24 2021
🚨︎ report
Adding Crop Ratio info to metadata (and output naming) dozer-labs.com/blog/namin…
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/poozoodle
πŸ“…︎ May 11 2021
🚨︎ report
Accelerometers produce a lot of metadata.
πŸ‘︎ 31
πŸ’¬︎
πŸ‘€︎ u/blackrosae
πŸ“…︎ May 06 2021
🚨︎ report
LPT: Delete EXIF/Metadata from photos before posting them online

EXIF (Exchangeable Image File Format) data is just a subset of data attached to images that could include things such as location, date, time, make/model of device, etc. Even if you have nothing to hide, there is no reason to leave data that can reveal personal info attached to a picture for any person/government/company to see.

Plus, it's very simple to delete.

For example, on Windows you just right-click the image > Properties > Details > Remove Properties and Personal Information > create a copy with all possible properties removed.

Easy as that! There is also software you can download that will bulk-delete metadata from multiple photos, if you have a bunch you want to scrub and don't want to click through the process one at a time. I believe you can even get phone apps that will delete the data if you don't want to download the pictures to a computer.
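
As one hedged example of the bulk route, a short Python sketch using Pillow (pip install Pillow); re-saving only the pixel data drops the EXIF block, at the cost of re-encoding JPEGs:

from pathlib import Path
from PIL import Image  # pip install Pillow

Path("scrubbed").mkdir(exist_ok=True)
for path in Path("photos").glob("*.jpg"):   # example folder
    img = Image.open(path)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))      # copy pixels only, no metadata
    clean.save(Path("scrubbed") / path.name)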

With privacy becoming less and less common as surveillance and mass data collection become the new norm, I think deleting metadata from photos is just a quick and easy way to take some of your online freedom back.

πŸ‘︎ 265
πŸ’¬︎
πŸ‘€︎ u/motorcyclemom69
πŸ“…︎ Mar 24 2021
🚨︎ report
How do you collect and manage ebook and comic metadata?

I just got a web-browser-based reader up and running for my ebooks, and I've come to realise that the metadata for my books is quite messed up. I'm trying to figure out a good way to sort them all out, including numbering books in a series. I figured I'd pick the selfhosted community's brains on the matter.

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/IntoYourBrain
πŸ“…︎ May 12 2021
🚨︎ report
The Goodreads metadata collection (retired) and 51 million Amazon book reviews.

The Goodreads API was retired on December 8th 2020. Mengting Wan from UCSD graciously scraped it before its demise. There may be other datasets besides this.

Jianmo Ni, also from UCSD, scraped Amazon's reviews in 2019. Since these book metadata collections go together in their utility, I've shared them as one.

Enjoy, readers, and please download for safe-keeping. APIs are closing, and precious publicly crowd-sourced meta information is disappearing from the net as more companies privatize their data and user contributions.

University of California, San Diego's Book Graph Dataset

Description: We collected three groups of datasets: (1) meta-data of the books, (2) user-book interactions (users' public shelves) and (3) users' detailed book reviews. These datasets can be merged together by matching book/user/review ids.

Basic Statistics of the Complete Book Graph:

  • 2,360,655 books (1,521,962 works, 400,390 book series, 829,529 authors)
  • 876,145 users; 228,648,342 user-book interactions in users' shelves (include 112,131,203 reads and 104,551,549 ratings)

URL: https://sites.google.com/eng.ucsd.edu/ucsdbookgraph/home ; https://github.com/MengtingWan/goodreads

Goodreads Book Datasets With User Rating 10M

Description: Approximately 10,000,000 books are available in the site's archives, and these datasets were collected from them. For requests to the API, the Goodreads Python library was used.

URL: https://www.kaggle.com/bahramjannesarr/goodreads-book-datasets-10m

Amazon Review Data

51,311,621 book reviews.

URL: http://deepyeti.ucsd.edu/jianmo/amazon/

Direct links: http://deepyeti.ucsd.edu/jianmo/amazon/categoryFiles/Books.json.gz ; http://deepyeti.ucsd.edu/jianmo/amazon/metaFiles/meta_Books.json.gz
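
A minimal sketch for peeking at these dumps in Python, assuming the one-JSON-object-per-line layout these UCSD files use (field names vary by file):

import gzip
import json

# stream the gzipped dump line by line, no need to unpack it first
with gzip.open("meta_Books.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record.get("title"))
        break  # just peek at the first record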

πŸ‘︎ 65
πŸ’¬︎
πŸ‘€︎ u/shrine
πŸ“…︎ Apr 24 2021
🚨︎ report
LPT: If you have a DSLR camera, take note of the serial number in case it is stolen. Many cameras include the serial number in the metadata of the photos they take. Tools exist to trace any photos taken with your camera once they are posted online.

One such service is CameraTrace. Under their FAQ is a partial list of supported camera models. Currently they support tracing on about 5 popular photo hosting sites, including Twitter.

Other similar services probably exist or are likely to appear in the future, and the range of sites they can search from will likely expand.
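
To check what your own camera writes, a hedged sketch with the exifread package; tag names vary by manufacturer, so it just looks for anything serial-like:

import exifread  # pip install exifread

with open("photo.jpg", "rb") as f:  # example file
    tags = exifread.process_file(f)

for name, value in tags.items():
    if "serial" in name.lower():  # e.g. "EXIF BodySerialNumber" on some models
        print(name, "=", value)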

πŸ‘︎ 29k
πŸ’¬︎
πŸ‘€︎ u/free-dadjokes
πŸ“…︎ Dec 15 2020
🚨︎ report
Google showing image metadata as site links in search results

Hi all - I'm wondering if anyone else has dealt with a problem like this. I do digital marketing for a healthcare company with around 30 locations in Connecticut. In the past few months, I've noticed that, if you search for a clinic location (ex: "company + town"), the results appear with site links/sublinks that are for images on the location's landing page. Google seems to be pulling the metadata for those images and linking to them - and unfortunately those image links really aren't part of the site navigation, they're just media uploaded into the Wordpress backend, which is confusing to site visitors.

To stop this from happening, I added more structure to the site and reindexed, but it's been a month since doing that and the results are still off. Granted, for SEO purposes I made sure to name every image after the location and include the location name in the image description, so it seems like the baseline solution would be to remove the clinic location name from that metadata, but I'm certain that will negatively impact our SEO in other ways.

Any ideas on how to address this problem?

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Dynexnumlock
πŸ“…︎ May 13 2021
🚨︎ report
What metadata is most important? My library looks crazy

I have artwork on the wrong books, random unnamed files, total chaos in my library. Where can I get a guide to label my files, edit metadata, or merge files for a clean-looking library? Is there anything like Filebot but for audiobooks?

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/mastererrob
πŸ“…︎ May 10 2021
🚨︎ report
How to migrate the cache and metadata folders?

Hi everyone,

I have a little issue with Jellyfin. My main hard drive (C:\) contains the cache and metadata folders:

C:\ProgramData\Jellyfin\Server\cache

C:\ProgramData\Jellyfin\Server\metadata

The issue is that my hard drive is completely full.

That's why I would like to migrate these folders to an external hard drive (F:\), for example, but I can't find a way to make it work properly.

I changed the path on the Jellyfin platform and copied/pasted the folders onto my external hard drive, but when I check Jellyfin and click on the profile of an actor/director, for example, it simply doesn't load.

Any clue ?

Thanks a lot.

πŸ‘︎ 16
πŸ’¬︎
πŸ“…︎ Apr 29 2021
🚨︎ report
I have created a new metadata-first approach to data infrastructure. I also created Apache Hive and am a former Facebook Data Architect. AMA!

I have written about it in Towards Data Science.

Thanks for all the questions folks! I really enjoyed answering them. If you have any other questions about data infrastructure in general you can reach me at rsm at datacoral dot co.

πŸ‘︎ 201
πŸ’¬︎
πŸ‘€︎ u/raghumurthy
πŸ“…︎ Mar 12 2021
🚨︎ report
Algo Bloom (ID 204358648) a 2:32 mp4 video of an ACO simulation in Algorand Colours: 15 max supply. Metadata hash + no freeze/clawback.

10s in

I've been experimenting with compute shaders, ant colony optimisation and Algorand recently. I've minted 15 NFTs of a 2:32 video of the simulation - I think it's cool.

The NFTs have no clawback or freeze and include an MD5 hash of the file to verify authenticity (See my other posts on this).

Distribution:

I'm holding back 6, three of which are for u/StonedFund, u/Mattazz (thanks for getting the ball rolling) and u/Anon_pepperoni11 (as payment in kind) if they want one.

I'm giving away 8 to the first 8 to DM me. EDIT: Lucky 8 have messaged, sending now.

I'm auctioning one. The auction ends in 24hrs, so 15:00 UTC. Along with the NFT I'm happy to provide the source code and a brief explainer to the highest bidder. DM me/post your bids below. I'll aim to keep this updated with the current leader.

Current Best Bid: 3 Algo

Thanks!

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/-prj
πŸ“…︎ Apr 30 2021
🚨︎ report
Looking for a Ubuntu tool (GUI preferable) that removes MP4/MKV metadata

So, on Windows you can remove a video's metadata (or several videos' at once) by going into Properties and clicking "remove metadata for this file" or whatever, and it'll remove the metadata attributed to it. To be clear, the file name stays the same and isn't renamed, but the tags and metadata title are removed.

The reason I ask is because I have videos I've added to Plex (that aren't part of an agent) that have things like their release dates, tags, authors and other such metadata on them. On Windows I knew how, plus there were also programs if you didn't want to go the right-click route. I have no idea how to do it on Linux (Ubuntu) though.

I would appreciate it if there's a means to do it in bulk, through a GUI, since I have lots of files that would need this to be the case.
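
Not a GUI, but as a hedged bulk sketch, ffmpeg (packaged for Ubuntu) can drop the global tags without re-encoding, driven from Python:

import subprocess
from pathlib import Path

for src in Path("videos").glob("*.mkv"):            # example folder
    dst = src.with_name(src.stem + ".clean.mkv")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-map", "0", "-c", "copy",                  # keep all streams, no re-encode
        "-map_metadata", "-1",                      # strip global metadata tags
        str(dst),
    ], check=True)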

Any thoughts, anyone?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/prodigalkal7
πŸ“…︎ May 13 2021
🚨︎ report
Metadata question for multiple versions of an audiobook with different narrators

I have encountered a bug in Plex. If you have multiple versions of an audiobook read by different people, each with different cover art, the album art gets merged by Plex. Here is an example with two versions of the Harry Potter books, and the metadata I use:

Artist: J.K. Rowling
Album: Harry Potter [03] and the Prisoner of Azkaban
Composer [Narrator]: Stephen Fry
Sort Album: Harry Potter [03] and the Prisoner of Azkaban (Fry)

Artist: J.K. Rowling
Album: Harry Potter [03] and the Prisoner of Azkaban
Composer [Narrator]: Jim Dale
Sort Album: Harry Potter [03] and the Prisoner of Azkaban (Dale)

These two albums show up separately (rather than being merged), thanks to the 'Sort Album' tag, which differs for each. Unfortunately, the cover art, which is different for each album, shows up the same for both.

Does anyone know how to fix this? Is there a workaround?

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/saltedlolly
πŸ“…︎ May 13 2021
🚨︎ report
Has anyone got Metadata hash working on algodesk?

I really want some unambiguous way to link the token to the underlying data, so putting the hash there seems just right. SHA-256 produces a 32-byte hash, so I tried pasting the usual hex string - no luck - algodesk complains about it not being 32B.

After some experimentation I realised the site would only accept 32 characters (rather than the 32 bytes the hex string represents). So I tried MD5, a 16-byte hash, which makes for 32 hex characters. Algodesk accepts it but hangs on creating the asset.

Has anyone successfully filled the Metadata Hash field with algodesk?

My guess is that, behind the scenes, algodesk is converting the hex string to binary, but the input filter is counting characters, not the bytes they represent.
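
That guess is easy to illustrate in Python: a SHA-256 digest is 32 raw bytes but 64 hex characters, while an MD5 hex digest is exactly 32 characters:

import hashlib

data = b"example asset"
print(len(hashlib.sha256(data).digest()))     # 32 (raw bytes)
print(len(hashlib.sha256(data).hexdigest()))  # 64 (hex characters)
print(len(hashlib.md5(data).hexdigest()))     # 32 (hex characters)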

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/-prj
πŸ“…︎ Apr 30 2021
🚨︎ report
Convert M4B audiobooks to MP3 while retaining chapters/metadata and original folder structure

Found a lot of different apps but not a single one that does all of this properly.

OS - Windows 10

Can be paid or free, I don't mind.
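
A hedged sketch with ffmpeg driven from Python, mirroring the folder structure and carrying tags across with -map_metadata 0 (note: ffmpeg does not write ID3 chapter frames into MP3 by default, so chapters may need a separate tool):

import subprocess
from pathlib import Path

SRC, DST = Path("audiobooks"), Path("mp3")          # example roots

for m4b in SRC.rglob("*.m4b"):
    out = DST / m4b.relative_to(SRC).with_suffix(".mp3")
    out.parent.mkdir(parents=True, exist_ok=True)   # mirror the folder structure
    subprocess.run([
        "ffmpeg", "-i", str(m4b),
        "-map_metadata", "0",                       # copy tags across
        "-c:a", "libmp3lame", "-q:a", "2",
        str(out),
    ], check=True)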

Thanks

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/TheToxicBeast
πŸ“…︎ May 10 2021
🚨︎ report
How to get Sonarr to rename TV episodes after a metadata update?

So I'm watching Shark Tank, and it seems every week when a new episode comes out, it's simply listed as "Episode 12" or whatever episode it is.

Sonarr downloads it and names it:

Shark Tank - S12E12 - Episode 18 WEBDL-720p.mkv" for example.

Then, a week later, the metadata is updated and the episode name changes to include the new title. For example:

Shark Tank - S12E12 - Rule Breaker, MountainFlow Eco-Wax, Yono Clip, NightCap (2009) - [720p.WEB.AVC.5.1 EAC3.KOGi]

If I check in Sonarr, I can see the name of the episode is also updated there, great! But if I go to the actual file, I see that the filename does not get updated. It's still just listed as Episode 12.

How do I get Sonarr to rename files on the computer after getting updated metadata for them?
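
One scriptable avenue, as a hedged sketch assuming Sonarr v3's command endpoint (the URL, API key, and series id below are all placeholders): trigger a metadata refresh, then run Rename Files from the Series Editor, or script the refresh on a schedule:

import requests  # pip install requests

SONARR = "http://localhost:8989"          # assumption: default Sonarr address
HEADERS = {"X-Api-Key": "YOUR_API_KEY"}   # placeholder API key

# RefreshSeries re-pulls series metadata; 42 is a hypothetical series id
resp = requests.post(SONARR + "/api/v3/command",
                     json={"name": "RefreshSeries", "seriesId": 42},
                     headers=HEADERS, timeout=30)
resp.raise_for_status()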

πŸ‘︎ 16
πŸ’¬︎
πŸ‘€︎ u/007craft
πŸ“…︎ Apr 26 2021
🚨︎ report
