Since most modern cameras these days have an accelerometer (that's what drives the auto horizon leveling you see on the camera), why is that information not provided in the RAW metadata so that image editing programs like Capture One and/or Lightroom could give you the option to use that data to auto-level your images, instead of relying on their less-than-reliable algorithms that don't seem to work when there are too many lines in the shot?
Given that GPS data can already be recorded in the metadata, I don't see why they don't record accelerometer data as well.
I have been looking to do this off and on for years now. There used to be plugins for it, from what I read. I see a lot of people asking this question and no one giving a usable modern answer. The information is in my server; I just want my server to puke out a spreadsheet with basic information from my libraries: matched name, year released, file name, etc.
Anyone? Thanks in advance
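For what it's worth, here is a minimal sketch of one way to do this with the python-plexapi package; the server URL, token, and output file name are placeholders you would need to fill in:

import csv
from plexapi.server import PlexServer  # pip install plexapi

# Placeholders: point these at your own server and token.
plex = PlexServer("http://192.168.1.10:32400", "YOUR_PLEX_TOKEN")

with open("library_export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["library", "matched name", "year", "file name"])
    for section in plex.library.sections():
        for item in section.all():
            # Not every item type has media files or a year, so guard with getattr.
            files = [part.file for media in getattr(item, "media", []) for part in media.parts]
            writer.writerow([section.title, item.title, getattr(item, "year", ""), "; ".join(files)])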
Hey, I proposed some time ago an NFT Metadata Standard on the Cardano Forum: https://forum.cardano.org/t/cip-nft-metadata-standard/45687
NFTs are slowly coming to Cardano, but there are projects that do not follow this idea or don't get what this is about. Maybe I can get a bigger reach posting it on reddit.
First of all, metadata attached to a transaction needs a top-level key, also called a label. Looking at this CIP, https://github.com/cardano-foundation/CIPs/blob/master/CIP-0010/CIP-0010.md, we see that there are reserved labels (0-15 and 65536-131071). In particular, you should avoid using labels 0 and 1 for your own metadata standard.
Unfortunately, some of these NFT projects are using label 1 for their NFT metadata.
I'm proposing to use the 721 label. It's free to use and is already implemented in some of the NFT projects.
Secondly, Cardano allows you to mint/send multiple tokens in a single transaction. To adapt the metadata and make use of this feature, I propose the following structure:
{
  "721": {
    "cbc34df5cb851e6fe5035a438d534ffffc87af012f3ff2d4db94288b": {
      "nft0": {
        "name": "NFT 0",
        "image": "ipfs://ipfs/<IPFS_HASH>",
        <other properties>
      },
      "nft1": {
        "name": "NFT 1",
        "image": "ipfs://ipfs/<IPFS_HASH>",
        <other properties>
      }
      ...
    }
  }
}
This model allows you to mint either one token or multiple tokens, even under different policies, in a single transaction. A third-party tool can then fetch the token metadata seamlessly. It doesn't matter whether the metadata includes just one token or several; the procedure for the third party is always the same:
Example:
We take the metadata from above and want to look up the metadata for the token: cbc34df5cb851e6fe5035a438d534ffffc87af012f3ff2d4db94288b.nft0
{"cbc34df5cb851e6fe5035a438d534ffffc87af012f3ff2d4db94288b": {
"nft0": {
"name": "NFT 0",
"image": "ipfs://ipfs/<IPFS_HASH>",
<other properties>
},
"
https://www.youtube.com/watch?v=S5wI1s4Kaf4
Please watch this video if you're confused about the metadata hash and its uses.
I know this is about SHA-256, which does not fit in the coded parameters, but you can use an MD5 hash instead of your SHA-256 to make it fit into the 32-character field.
Please try to use these tools to their full extent. I screenshot my encrypted metadata and save it with my original artworks. If authenticity ever came into question, I'm the only person who could tell you what the metadata hash says, since it would be impossible to reverse. I can therefore verify my file is not only timestamped on the chain first, but also has a uniquely identifiable hash.
Link to SHA-256 generator
https://passwordsgenerator.net/sha256-hash-generator/
If anyone has further questions plz feel free to comment, I'm sure other people have similar questions. Thanks all <3
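If you would rather generate the hashes locally than paste your file into a website, here is a minimal sketch using Python's standard hashlib (the file name is a placeholder):

import hashlib

def file_hashes(path):
    """Compute MD5 and SHA-256 hex digests of a file."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

md5_hex, sha256_hex = file_hashes("artwork.png")
print("MD5   :", md5_hex)      # 32 hex characters
print("SHA256:", sha256_hex)   # 64 hex characters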
My class had a paper due Friday. It's a pretty brief 10-page, double-spaced analysis essay. Read the source material, give me your analysis type deal. The assignment was posted on the course site around mid-March, so the students had plenty of time and received several reminders. I discussed the paper in class many times and set up several office hours meetings - only 2 students (out of 30) showed up.
Thursday night, around 11:45pm, I received an email from a student. They claimed that they dropped their computer and cracked the screen. The computer doesn't work now, and therefore, they won't be able to complete the assignment on time. They also provided multiple images and videos as "evidence." One of the images was an Apple service receipt itemizing the damage, diagnosis, repairs, cost, etc. This whole production seemed a bit suspicious to me. What set my bullshit alarm off was a particular piece of information on the receipt: it said the laptop was purchased in Dec. 2017 BUT it was still in warranty. Hmm... As a Mac user myself, I know that Macs have a one-year warranty from the date of purchase. So, I downloaded the image to my computer, right click, get info, and voila. The image was taken in Oct. 2018. Gotcha!
I confronted the student with this information and reminded them of our university's student conduct policy, and they immediately folded. The end-of-semester stress got to them, they panicked and didn't know what to do, bla bla bla. I decided not to file a report with the university, but the student also didn't get the extension they requested. The essay they submitted is a hot pile of garbage and reads like it was written in 15 minutes. So, I decided a poor grade on this assignment (40% of the final grade) is enough punishment.
UPDATE: Thank you all for your advice. I contacted the student's home department to let them know about the student's behavior in case they have pulled the same deception tactics before or continue to do so in the future.
Every thread regarding btrfs maintenance points to this script:
https://github.com/kdave/btrfsmaintenance
More specifically:
https://github.com/kdave/btrfsmaintenance/blob/master/btrfs-balance.sh
It seems some people blindly trust it because it is from one of the btrfs developers/maintainers. But when it comes to balance, I notice it will balance metadata weekly, while according to the btrfs mailing list you should not balance metadata at all, at least not as part of regular maintenance tasks; it is only useful for certain specific use cases, mostly involving a filesystem spanning multiple disks, migration, etc.
And you should definitely not use musage=0! But that is what would happen weekly if you run the maintenance script with its defaults.
For reference, follow this thread and its links from this point: https://www.reddit.com/r/btrfs/comments/cnjdxb/advice_on_running_a_balance/ewbwank?utm_source=share&utm_medium=web2x&context=3
So now I am setting up my home server with 2 SSDs and 2 HDDs. One of the HDDs is for backups; it is mounted/unmounted once per night and btrbk does its thing.
Looking at the metadata of one of my SSDs:
sudo btrfs filesystem df /mnt/disks/data0
Data, single: total=599.01GiB, used=596.46GiB
System, single: total=32.00MiB, used=96.00KiB
Metadata, single: total=2.00GiB, used=783.30MiB
GlobalReserve, single: total=512.00MiB, used=0.00B
When I ran as a test -musage=20: Done, had to relocate 2 out of 603 chunks
Metadata, single: total=1.00GiB, used=783.33MiB
So total metadata shrank in half. But was that necessary ??
After reading lots of topics, this is my conclusion: run balance MONTHLY, with DUSAGE=20 and NO MUSAGE, in 2 runs in case there is not enough space for dusage at 20%:
run_task btrfs balance start -v -dusage=10 /dev/DISK1
run_task btrfs balance start -v -dusage=20 /dev/DISK1
A monthly SCRUB is performed on each disk first, before this runs.
Does this make sense?
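Purely to illustrate that plan (this is not the btrfsmaintenance script itself), here is a minimal Python sketch that runs the monthly scrub followed by the two-pass data balance; the mount point is a placeholder for one mounted btrfs filesystem:

import subprocess

MOUNTPOINT = "/mnt/disks/data0"  # placeholder: adjust per filesystem

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Monthly routine: scrub first (-B waits for completion), then balance data
# chunks only, in two passes, never touching metadata (no -musage filter).
run(["btrfs", "scrub", "start", "-B", MOUNTPOINT])
run(["btrfs", "balance", "start", "-v", "-dusage=10", MOUNTPOINT])
run(["btrfs", "balance", "start", "-v", "-dusage=20", MOUNTPOINT])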
Hey everyone, I've been pulling my hair out over this issue with my music library for a couple of weeks now. For context, I had been running a Plex server on my Ubuntu machine for years until the PC died. I created a new server on a Windows machine, and all my movies and TV shows show up exactly as they did on the previous server. However, I noticed that my music was missing genres, styles, moods, and other metadata. I made sure the scan agents all pointed towards the updated Plex scanner and have tried several metadata refreshes, to no avail.
My music is organized in the structure Plex recommends, and I have even rescanned my library using MusicBrainz Picard to see if that would help. The only way I can get genre metadata to show up in Plex is if I change the library setting to use local metadata; however, that doesn't solve the issue of mood, artist bio, artist/song recommendations, and other features I grew fond of using.
Has anyone else experienced this before and how do I fix it?
So I was looking through some of the metadata that's in the photos and I noticed something kind of interesting. It's not in all the photos, but some of them show the subject distance in the metadata.
For example, it says that the photo of Kris's hair was taken from a distance of 140 mm (roughly 6 inches for you American people), and that most of the large bright objects were 960 mm (about 3 feet) away from the camera.
I don't know how accurate this camera is at detecting distance, but I'd guess that 6 inches would be about right for the hair photo at a wide angle (4 mm), which suggests it would surely know whether these bright spots were a finger on the lens or something further away.
https://preview.redd.it/4l1nvdudc6y61.png?width=2450&format=png&auto=webp&s=be0545337e2bc2cfcce7ecc95ebd54b4934de2c7
But unfortunately the metadata suggests that pretty much all the other objects are also either 940 mm or 960 mm from the camera, which casts a bit of doubt on the accuracy. I don't know how this model of camera detects distance, whether it uses infrared or just estimates it; it might just have a few "set distances" to aid with focus or the power of the flash.
I just thought I'd mention this since I haven't heard anybody else talking about it. I haven't looked at all the metadata for all the photos (it seems to be randomly missing from some of them), but perhaps someone else will find something interesting in the other photos.
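For anyone who wants to check this themselves, here is a minimal sketch that reads the SubjectDistance tag with the exifread package (the file name is a placeholder):

import exifread  # pip install exifread

def subject_distance(path):
    """Return the EXIF SubjectDistance tag, or None if the photo doesn't record one."""
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    tag = tags.get("EXIF SubjectDistance")
    return str(tag) if tag else None

print(subject_distance("photo.jpg"))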
The title question.
Out of the default DBs and what's available in the default plugins repositories, what DB is everyone using for their Anime collection? Using TMDB and it's okay like 90% of the time, and I manually add the missing info on the remaining 10%, wondering if that's the best option or if someone else is closer to 100%.
As always, link before description.
GitHub Repo: https://github.com/e666666/TouhouSongDatabase
Link to download the whole program: https://github.com/e666666/TouhouSongDatabase/archive/refs/heads/main.zip
Link to download just the database: https://github.com/e666666/TouhouSongDatabase/raw/main/videos.json
First of all, no, I'm not saving the videos themselves, just the metadata. What's metadata, you ask? It's just a fancy way of saying the info in the video description, such as who sang it, the arrangement, or the original Touhou song name.
And why would you save it? Two reasons.
Enough gibberish, so how do you use it? To use the actual program, you will need Python 3 installed. After that, you have two ways of downloading the program: either the main.zip link above or using Git to clone it. While using Git means another program to install, it simplifies updating later on, since you can just "git pull" instead of redownloading it again.
Now, double-click Start.bat or Start.sh depending on your system, and hopefully, it will just work. There are two things that you might ever use, the first and third options.
The first option allows you to search for a specific property in the database, which is again a fancy way of saying you can find which songs have something in common. In the second scenario above, I use it to find videos with the same song title on other channels. While searching on YouTube might be faster, I find this method neater.
One thing is that you might be overwhelmed by all the ...
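As an illustration of that search idea, here is a minimal sketch that filters the downloaded videos.json by a property. The field names used here ("title", "vocalist") are hypothetical, since I haven't verified the actual schema, and it assumes the file parses to a list of objects:

import json

# Load the downloaded database (videos.json from the repo linked above).
with open("videos.json", encoding="utf-8") as f:
    videos = json.load(f)  # assumed to be a list of dicts

def search(entries, field, value):
    """Return every entry whose (hypothetical) field contains the given text."""
    return [e for e in entries if value.lower() in str(e.get(field, "")).lower()]

# Hypothetical example: find all entries sharing a vocalist.
for entry in search(videos, "vocalist", "Merami"):
    print(entry.get("title", "<untitled>"))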
So far I've been excited with the progress that ProtonDrive has made and I'm liking the security model so far. I have a suggestion though: can you add an option to remove the metadata of files in the application client itself, before it gets encrypted in the client and uploaded to the ProtonDrive servers? Something like exiftool?
Edit: https://protonmail.uservoice.com/forums/284483-protonmail/suggestions/43341975-add-a-metadata-remover-to-protondrive-client-apps
It's been 2 days. This same library worked without issue with the official jellyfin docker image but based on comments I read here I decided to switch.
I started looking over the logs but I'm not seeing anything that jumps out. I'm going to blow away the library and start over but I was wondering if anyone running the latest linuxserver/jellyfin docker image has run into this issue and what steps might be good to take to troubleshoot.
This same library had no significant issues with metadata previously so I don't believe it's a file/directory naming convention issue.
I'm trying to utilize Amundsen for metadata management in our shop. I managed to install it in an Ubuntu VM, but I am stuck on making it work with my databases. Are there any easy-to-understand tutorials available for this tool?
It seems to me that maintaining Amundsen alone requires dedicated staff, so would it be possible to suggest some other open-source alternatives that have similar functionality?
I manage the r/RomanceBooks sci-fi recommendation database, and I'm in the process of developing an additional database for fantasy and PNR recommendations! Info on the sci-fi database is at this post. The idea is that you can filter through a variety of metadata to sort through books recommended by r/RomanceBooks readers. I think the fantasy database will need to be separate because each genre has some unique things that don't apply to the other genre.
I'd love to hear your thoughts on what metadata you'd like to be filterable within a fantasy romance/PNR database. Both the sci-fi database and the upcoming fantasy database are meant to be community resources, so I'm open to any help or feedback you'd like to give!
Without further ado, here are my initial thoughts on metadata for a fantasy romance/PNR database. (Edit: Now with additions based on comments! Thank you!!! These suggestions in the comments are amazing!!)
Friend of a friend was just relieved of 2 monster boxes worth.
I'm trying to update and I'm getting this error:
Errors during downloading metadata for repository 'updates':
- Status code: 404 for http://fedora.zero.com.ar/linux/updates/34/Everything/x86_64/repodata/repomd.xml (IP: 190.111.255.148)
- Downloading successful, but checksum doesn't match. Calculated: 19681c.................
Error: Failed to download metadata for repo 'updates': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
EXIF (Exchangeable Image File Format) data is just a subset of data attached to images that could include things such as location, date, time, make/model of device, etc. Even if you have nothing to hide, there is no reason to leave data that can reveal personal info attached to a picture for any person/government/company to see.
Plus, it's very simple to delete.
For example, on Windows you just right-click the image > Properties > Details > Remove Properties and Personal Information > Create a copy with all possible properties removed.
Easy as that! There is also software you can download that will bulk delete metadata from multiple photos if you have a bunch you want to scrub and don't want to click through the process one at a time. I believe you can even get phone apps that will delete the data if you don't want to download the pictures to a computer.
With privacy becoming less and less common as surveillance and mass data collection become the new norm, I think deleting metadata from photos is just a quick and easy way to take some of your online freedom back.
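If you have a whole folder to scrub and want to script it instead of clicking, here is a minimal sketch using the Pillow library; it rebuilds each JPEG from its pixel data only, so the EXIF block is not carried over (the folder names are placeholders):

from pathlib import Path
from PIL import Image  # pip install Pillow

SRC = Path("photos")          # placeholder: folder with original photos
DST = Path("photos_clean")    # placeholder: folder for scrubbed copies
DST.mkdir(exist_ok=True)

for jpg in SRC.glob("*.jpg"):
    img = Image.open(jpg)
    # Re-create the image from pixel data only, so no EXIF/metadata is carried over.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(DST / jpg.name)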
I just got a web-browser-based reader up and running for my ebooks, and I've come to realise that the metadata for my books is quite messed up. I'm trying to figure out a good way to sort it all out, including numbering books in a series. I figured I'd pick the selfhosted community's brains on the matter.
The Goodreads API was retired on December 8th 2020. Mengting Wan from UCSD graciously scraped it before its demise. There may be other datasets besides this.
Jianmo Ni, also from UCSD, also scraped Amazon's reviews in 2019. Since these book metadata collections go together in their utility I've shared them as one.
Enjoy, readers, and please download for safe-keeping. APIs are closing, and precious publicly crowd-sourced meta information is disappearing from the net as more companies privatize their data and user contributions.
Description: We collected three groups of datasets: (1) meta-data of the books, (2) user-book interactions (users' public shelves) and (3) users' detailed book reviews. These datasets can be merged together by matching book/user/review ids.
Basic Statistics of the Complete Book Graph:
URL: https://sites.google.com/eng.ucsd.edu/ucsdbookgraph/home ; https://github.com/MengtingWan/goodreads
Description: Approximately 10,000,000 books are available on the site's archives, and these datasets were collected from them. For the API requests, we used the Goodreads Python library.
URL: https://www.kaggle.com/bahramjannesarr/goodreads-book-datasets-10m
51,311,621 book reviews.
URL: http://deepyeti.ucsd.edu/jianmo/amazon/
Direct links: http://deepyeti.ucsd.edu/jianmo/amazon/categoryFiles/Books.json.gz ; http://deepyeti.ucsd.edu/jianmo/amazon/metaFiles/meta_Books.json.gz
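For anyone who wants to poke at these files, a minimal sketch for reading them, assuming each line is one JSON object (which is how these UCSD releases are usually packaged):

import gzip
import json

def read_json_gz(path, limit=None):
    """Yield records from a gzipped JSON-lines file (one object per line)."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            yield json.loads(line)

# Example: peek at the first few Amazon book metadata records.
for record in read_json_gz("meta_Books.json.gz", limit=3):
    print(record.get("title"), "|", record.get("asin"))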
One such service is CameraTrace. Under their FAQ is a partial list of supported camera models. Currently they support tracing on about 5 popular photo hosting sites, including Twitter.
Other similar services probably exist or are likely to appear in the future, and the range of sites they can search from is likely to expand as well.
Hi all - I'm wondering if anyone else has dealt with a problem like this. I do digital marketing for a healthcare company with around 30 locations in Connecticut. In the past few months, I've noticed that, if you search for a clinic location (ex: "company + town"), the results appear with site links/sublinks that are for images on the location's landing page. Google seems to be pulling the metadata for those images and linking to them - and unfortunately those image links really aren't part of the site navigation, they're just media uploaded into the Wordpress backend, which is confusing to site visitors.
To stop this from happening, I added more structure to the site and reindexed, but it's been a month since doing that and the results are still off. Granted, for SEO purposes I made sure to name every image after the location and include the location name in the image description, so it seems like the baseline solution for this would be to remove the clinic location name from that metadata, but I'm certain that will negatively impact our SEO in other ways.
Any ideas on how to address this problem?
I have artwork on the wrong books, random unnamed files, total chaos for my library. Where can I get a guide to label my files, edit metadata, or merge files together for a clean-looking library? Is there anything like Filebot but for audiobooks?
Hi everyone,
I have a little issue with Jellyfin. My main hard drive (C:\) contains the folders for cache and metadata, such as:
C:\ProgramData\Jellyfin\Server\cache
C:\ProgramData\Jellyfin\Server\metadata
The issue is that my hard drive is completely full.
That's why I would like to migrate these folders to an external hard drive (F:\, for example), but I can't find a way to make it work properly.
I changed the paths on the Jellyfin platform and copied/pasted the folders onto my external hard drive, but when I check Jellyfin and click on the profile of an actor/director, for example, it simply doesn't load.
Any clue?
Thanks a lot.
I have written about it in Towards Data Science.
Thanks for all the questions folks! I really enjoyed answering them. If you have any other questions about data infrastructure in general you can reach me at rsm at datacoral dot co.
I've been experimenting with compute shaders, ant colony optimisation, and Algorand recently. I've minted 15 NFTs of a 2:32 video of the simulation; I think it's cool.
The NFTs have no clawback or freeze and include an MD5 hash of the file to verify authenticity (See my other posts on this).
Distribution:
I'm holding back 6, three of which are for u/StonedFund, u/Mattazz (thanks for getting the ball rolling) and u/Anon_pepperoni11 (as payment in kind) if they want one.
I'm giving away 8 to the first 8 to DM me. EDIT: Lucky 8 have messaged, sending now.
I'm auctioning one. The auction ends in 24 hrs, so 15:00 UTC. Along with the NFT, I'm happy to provide the source code and a brief explainer to the highest bidder. DM me/post your bids below. I'll aim to keep this updated with the current leader.
Current Best Bid: 3 Algo
Thanks!
So, on Windows you can remove metadata from a video (or multiple) by going into Properties and clicking "remove metadata for this file" or whatever, and it'll remove the metadata attributed to it. To be clear, the file name stays the same and isn't renamed, but the tags and metadata name are.
The reason I ask is because I have videos I've added to Plex (that aren't a part of an agent) that have things like their release dates, tags, authors, and other such metadata on them. On Windows, I knew how, plus there were also programs if you didn't want to go the right-click route. I have no idea how to do it on Linux (Ubuntu), though.
I would appreciate it if there were a means to do this in bulk, through a GUI, since I have lots of files that would need it.
Any thoughts, anyone?
I have encountered a bug in Plex. If you have multiple versions of an audiobook read by different people, each with different cover art, the album art is getting merged by Plex. Here is an example with two versions of the Harry Potter books, and the metadata I use:
Artist: J.K. Rowling
Album: Harry Potter [03] and the Prisoner of Azkaban
Composer [Narrator]: Stephen Fry
Sort Album: Harry Potter [03] and the Prisoner of Azkaban (Fry)
Artist: J.K. Rowling
Album: Harry Potter [03] and the Prisoner of Azkaban
Composer [Narrator]: Jim Dale
Sort Album: Harry Potter [03] and the Prisoner of Azkaban (Dale)
These two albums show up separately (rather than being merged), thanks to the βSort Albumβ which differs for each. Unfortunately, the cover art for each album, which is different for each, is showing up the same for both.
Does anyone know how to fix this? Is there a workaround?
I really want some unambiguous way to link the token to the underlying data, so putting the hash there seems just right. SHA-256 produces a 32-byte hash, so I tried pasting the usual hex string; no luck, algodesk complains about it not being 32 bytes.
After some experimentation I realised the site would only accept 32 characters (rather than the 32 bytes the hex string represents). So I tried MD5, a 16-byte hash, which makes for 32 characters. Algodesk accepts it but hangs on creating the asset.
Has anyone successfully filled the Metadata Hash field with algodesk?
My guess is that, behind the scenes, algodesk is converting the hex string to binary but the input filter is counting characters, not the binary they represent.
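For what it's worth, the mismatch described here is easy to see with Python's hashlib: a SHA-256 digest is 32 raw bytes, but its hex string is 64 characters, while an MD5 hex string happens to be 32 characters. A minimal sketch (the file name is a placeholder):

import hashlib

data = open("artwork.mp4", "rb").read()  # placeholder file

sha256 = hashlib.sha256(data)
md5 = hashlib.md5(data)

print(len(sha256.digest()))     # 32: raw bytes, what a 32-byte field actually wants
print(len(sha256.hexdigest()))  # 64: hex characters, rejected by a 32-character filter
print(len(md5.hexdigest()))     # 32: hex characters, accepted, but it's only a 16-byte hash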
Found a lot of different apps but not a single one that does all of this properly.
OS - Windows 10
Can be paid or free, I don't mind.
Thanks
So I'm watching Shark Tank, and it seems every week when they come out with an episode, it's simply listed as "Episode 12" or whatever episode it is.
Sonarr downloads it and names it:
"Shark Tank - S12E12 - Episode 18 WEBDL-720p.mkv", for example.
Then a week later, the metadata is updated and the episode name changes to include the new name. For example:
Shark Tank - S12E12 - Rule Breaker, MountainFlow Eco-Wax, Yono Clip, NightCap (2009) - [720p.WEB.AVC.5.1 EAC3.KOGi]
If I check in Sonarr, I can see the name of the episode is also updated there, great! But if I go to the actual file, I see that the filename does not get updated. It's still just listed as Episode 12.
How do I get Sonarr to rename files on the computer after getting updated metadata for them?