I came across this in the wild and I have no idea what they are talking about. I am fairly fresh out of school and have only been in a DBA position for a year or so now, so I'm trying to see if this is a gap in my knowledge and if so, how I can learn more about it.
Any good resources for more current database antipatterns or best practices? (I saw a thread here from like 9 years ago recommending Joe Celko's books, but the most recent of those is from 2014?)
In class, and just in general, we were required to import/export data between a variety of file types including .csv with never a mention of .csv being something to avoid. Are .csv files really antipatterns? I am guessing they mean json as a modern format?
More info: the data in question is in a relational database (RDBMS)*.
*Edited to "relational database" — originally my brain went "the opposite of NoSQL is obviously SQL".
Hi, this might not be the correct subreddit for this question (apologies in advance), but on the other hand it still might be, given the link with WASM and the general focus on performance.
I have a situation where I need to transfer data between a (Rust) server and a client every time a user does something (which can happen very frequently). The issue is that the dataset being transferred sometimes grows quite large, and the transfer is becoming unacceptably slow.
I'm looking at several strategies to optimise this.
One thing that struck me is that JSON has a lot of waste. Is there a way to 'compress' JSON to its bare bone data, and then decompress it server side?
Example struct:

    {
      "name": "name",
      "height": 1,
      "length": 2,
      "polygon": [
        { "x": 0, "y": 0 },
        { "x": 0, "y": 0 },
        { "x": 0, "y": 0 }
      ]
    }
In memory this whole structure is only a few bytes once you leave out all the braces, key names, whitespace, etc. For example, a string like "name 1 2 [[0 0][0 0][0 0]]" is already much more compact.
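To make the size difference concrete, here is a minimal Python sketch (not the actual Rust wire code, just an illustration) comparing the JSON text of the struct above against a fixed binary layout built with the standard `struct` module, plus plain zlib compression of the JSON:

```python
import json
import struct
import zlib

record = {
    "name": "name",
    "height": 1,
    "length": 2,
    "polygon": [{"x": 0, "y": 0}, {"x": 0, "y": 0}, {"x": 0, "y": 0}],
}

text = json.dumps(record)

# Fixed binary layout (assumed, for illustration): 1-byte name length,
# name bytes, two u32s, a u32 point count, then i32 x/y pairs.
packed = struct.pack("<B", len(record["name"])) + record["name"].encode()
packed += struct.pack("<II", record["height"], record["length"])
packed += struct.pack("<I", len(record["polygon"]))
for p in record["polygon"]:
    packed += struct.pack("<ii", p["x"], p["y"])

compressed = zlib.compress(text.encode())

print(len(text.encode()), len(packed), len(compressed))
```

In Rust the usual way to get this effect without hand-rolling a layout is a binary serde format such as bincode, MessagePack, or CBOR, which drop the key names and punctuation for you; generic compression (gzip/deflate on the HTTP response) is the other easy win and can be combined with either.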
For example, if a source file contained 100 transformed instances of the same 3D model, but stored only as raw vertices, is the compression model built to recognize that redundancy? Are algorithms for recognizing various kinds of structured data like this updated regularly and integrated into the popular compression methods?
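General-purpose compressors (DEFLATE, zstd, etc.) do catch verbatim repeats, because they work by finding repeated byte sequences within a search window — but they don't "understand" the data, so 100 *transformed* instances (where rotation or scaling changes the actual bytes) won't collapse the same way; that takes a domain-aware format (e.g. mesh instancing in glTF, or mesh-specific compressors like Draco). A small sketch showing the repeat-detection part with stdlib zlib:

```python
import os
import zlib

# 100 verbatim copies of the same 72-byte "vertex block"
# versus the same amount of incompressible random bytes.
vertex_block = bytes(range(72))   # stand-in for one model's vertex data
repeated = vertex_block * 100
random_data = os.urandom(len(repeated))

print(len(zlib.compress(repeated)))     # the repeats collapse dramatically
print(len(zlib.compress(random_data)))  # barely shrinks at all
```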
I've created a simple table with different-colored rows for data entry at my job. Whenever I filter results, all the rows change colors (each row carries its own color with it). Would it be possible to lock the table so that the data is sorted/filtered but the table formatting stays in place?
Also, one of the columns I paste data in uses an HTML link. It always changes to a specific format. Would it be possible, in the same manner, to lock the table format so that I could paste the link without the cell format changing?
Thanks!
My assignment is asking me to graph two separate sets of data (D.O. and Temp.) as a function of another (Water Depth). I think the best way to display this information would be a scatter plot, but I can't figure out how to format it according to the directions. So I was wondering if I had the wrong graph idea completely, or was just doing something wrong with my scatter plot.
The instructions tell me to "make the graph as intuitive as possible" and "figure out how to put depth on the Y-axis with zero at the top and have the X-axis cross the Y-axis at 0 (i.e., the top)". I have been trying for a couple of hours now and feel pretty stuck. I included a screenshot of the instructions and the data I am using. This is my first time in Excel, so I apologize in advance if I am missing something obvious.
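For reference, this reversed-depth layout is the classic oceanographic profile: in Excel it corresponds to the vertical axis's "Values in reverse order" option plus setting where the horizontal axis crosses. Outside Excel, the same idea looks like this in a matplotlib sketch (hypothetical data):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just renders to a file
import matplotlib.pyplot as plt

# Hypothetical profile data: depth in m, D.O. in mg/L, temp in °C.
depth = [0, 5, 10, 15, 20]
do_mgl = [9.1, 8.4, 6.2, 4.0, 2.5]
temp_c = [22, 21, 18, 12, 8]

fig, ax = plt.subplots()
ax.plot(do_mgl, depth, marker="o", label="D.O. (mg/L)")
ax.plot(temp_c, depth, marker="s", label="Temp (°C)")
ax.invert_yaxis()                    # depth 0 at the top
ax.xaxis.set_ticks_position("top")   # x-axis along the top edge
ax.xaxis.set_label_position("top")
ax.set_ylabel("Water depth (m)")
ax.legend()
fig.savefig("depth_profile.png")
```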
I think I have a firmware issue on an SSD that was also the boot drive (single partition) and the only way I can fix it is by formatting it. Don't want to lose all my data but something tells me if I just copy it and paste it back I'll end up with some deep rooted Windows problems that will make my OS crappy forever.
Hi everyone.
I want to read some variable data from web services into LibreOffice Calc using the
=WEBSERVICE("https://address")
function.
But it only works if the data is in XML format.
And some services only produce JSON; WEBSERVICE returns Err:540 when the data is in JSON format.
How can I read data in JSON format?
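One common workaround is to convert the JSON to CSV outside Calc and then open or link the CSV. A minimal Python sketch, assuming a hypothetical payload of flat records (the service URL and field names here are made up):

```python
import csv
import io
import json

# Hypothetical JSON payload a web service might return.
payload = '[{"symbol": "EUR", "rate": 1.08}, {"symbol": "GBP", "rate": 1.27}]'
rows = json.loads(payload)

# Write it out as CSV, which Calc opens natively.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["symbol", "rate"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

For live data you would fetch the URL (e.g. with `urllib.request`) instead of the inline string, write the CSV to disk, and re-run the script whenever you want fresh values.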
I region-changed from Japan to US, but I didn't realize that it would lock me out of playing games such as Monster Hunter, Super Mario Maker, etc.
If I do a complete system data delete, will my 3DS return to normal, allowing me to play those games I currently can't?
Hello all, hopefully the pictures make the question easier. The dates are currently in a long format with times, how can I change them to "DDMMMYY" ? I tried text("DateRaised","ddmmmyy") in the groupby part of the formula, but it just errored out.
    AddColumns(
        GroupBy(
            Filter('SOURCE', 'Date Raised' >= DatePicker1.SelectedDate && 'Date Raised' <= DatePicker1_1.SelectedDate),
            "DateRaised", "GrpRaised"
        ),
        "Raised Per Day", CountRows(GrpRaised)
    )
https://preview.redd.it/glzu956dfqd81.jpg?width=183&format=pjpg&auto=webp&s=0592fa24210344b5eb317336823d4002ec6b79af
https://preview.redd.it/9pzxi92cfqd81.jpg?width=1023&format=pjpg&auto=webp&s=2d190408b031aa4817e532d847bd3ee35014229c
I need some help formatting my streaming data, as provided by Spotify, into Excel, as I'd like to produce my own version of Wrapped. However, the format the data currently comes in is not particularly useful. I'm on my phone so I'm not sure how to post a photo, but it comes as a JSON file; I opened it in Notepad and it reads:
    {
      "End time": "whatever the end time is",
      ... plus 3 similar lines for artist name, track name and msPlayed
    }
Is there any way to convert this once copied into Excel? I tried delimiting, but that didn't do much good.
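Rather than pasting and delimiting, the usual route is to convert the JSON to CSV first and open that in Excel. A short Python sketch; the field names (`endTime`, `artistName`, `trackName`, `msPlayed`) are assumed from the post's description of the Spotify export, so check them against your own file:

```python
import csv
import json

# Sample records shaped like Spotify's streaming-history export
# (keys assumed; verify against the real JSON file).
data = json.loads("""[
  {"endTime": "2021-11-20 14:03", "artistName": "Artist A",
   "trackName": "Track 1", "msPlayed": 215000},
  {"endTime": "2021-11-20 14:07", "artistName": "Artist B",
   "trackName": "Track 2", "msPlayed": 180000}
]""")

# One CSV row per listen; Excel opens this directly.
with open("streaming_history.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(data[0].keys()))
    writer.writeheader()
    writer.writerows(data)
```

For the real export you would replace the inline string with `json.load(open("StreamingHistory0.json", encoding="utf-8"))`.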
Hello,
My data includes these variables: Condition1Pre, Condition1Post, Condition2Pre, Condition2Post. So: two conditions, each measured repeatedly pre and post. Each participant completes both conditions. It is like a two-way ANOVA design; I will fit a mixed model.
data_long <- pivot_longer(data, cols = c(C1Pre, C1Post),
names_to = "C1", values_to = "Value")
With the code above I can do it one condition at a time, but I also have to create a time factor.
data_long <- pivot_longer(data, cols = c(C1Pre, C1Post),
names_to = "C1", values_to = "Value",
cols = c(C1Pre, C1Post),
names_to = "Time", values_to = "Value")
I tried this one but it leads to an error: Error in pivot_longer(data, cols = c(C1Pre, C1Post), names_to = "C1", : formal argument "cols" matched by multiple actual arguments
How can I do this all together including condition factors (c1, c2) and time factors (pre, post)?
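The same wide-to-long reshape, with the condition/time split done in one pass, looks like this in Python/pandas (hypothetical data, for illustration — not your actual dataset):

```python
import pandas as pd

# Hypothetical wide data: one row per participant,
# Condition x Time crossed in the column names, as in the post.
wide = pd.DataFrame({
    "id": [1, 2],
    "C1Pre": [10, 12], "C1Post": [14, 15],
    "C2Pre": [9, 11],  "C2Post": [13, 16],
})

long = wide.melt(id_vars="id", var_name="cell", value_name="Value")
# Split "C1Pre" into Condition = "C1" and Time = "Pre".
long[["Condition", "Time"]] = long["cell"].str.extract(r"(C\d)(Pre|Post)")
long = long.drop(columns="cell")
print(long)
```

In tidyr the analogous single call splits the column names the same way: `pivot_longer(data, cols = c(C1Pre, C1Post, C2Pre, C2Post), names_to = c("Condition", "Time"), names_pattern = "(C\\d)(Pre|Post)", values_to = "Value")` — `cols` is given once, and the two `names_to` components replace the need for two separate pivots.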
Hello, I've run into a bit of a wall here and I have no idea how to solve this issue, if it even is possible.
I'm attempting to create a map of sorts for a RPG campaign, and this map is composed of ~500 images individually inserted into cells, naturally forming a sort of "grid" to the map, the color of which is determined by the cell background color. I also created a lookup formula to pull data from another sheet to show players which cell they are currently in on the map.
Now here comes the issue. I'm trying to conditionally format the cells of said map, so that whenever the lookup formula displays the cell the players are currently in, the corresponding cell changes its background color to "highlight" where they are on the map, just to make it a little easier. There's no text in the map cells as I mentioned, just images. If I input any text those images are erased, at least as far as I understand.
Any help is greatly appreciated, if this turns out to be an unsolvable case then that's okay as well. Just can't seem to wrap my head around it.
Hi. Is there any way to automatically extract data from multiple Word documents (form fill-up documents that share the same template) and fill it into Excel under the respective heading columns?
I work in an industry where I get hundreds of Word forms, which I currently transcribe into Excel manually. Please help me out. Thanks.
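This is very automatable: a .docx file is just a ZIP archive containing `word/document.xml`, so the text can be pulled out with the standard library alone. A self-contained sketch (it builds a tiny stand-in .docx in memory so you can see the mechanism; the field labels are made up):

```python
import io
import xml.etree.ElementTree as ET
import zipfile

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def paragraphs(docx_bytes):
    """Extract paragraph texts from a .docx (a ZIP holding word/document.xml)."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    return ["".join(t.text or "" for t in p.iter(W + "t"))
            for p in root.iter(W + "p")]

# Build a minimal stand-in .docx in memory so the sketch is self-contained.
doc_xml = (f'<w:document xmlns:w="{W[1:-1]}"><w:body>'
           '<w:p><w:r><w:t>Name: Alice</w:t></w:r></w:p>'
           '<w:p><w:r><w:t>Age: 30</w:t></w:r></w:p>'
           '</w:body></w:document>')
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", doc_xml)

print(paragraphs(buf.getvalue()))  # ['Name: Alice', 'Age: 30']
```

In practice you would loop over your real files with `paragraphs(open(path, "rb").read())`, split each "Label: value" line, and write the values out with the `csv` module (or a library like openpyxl) for Excel; python-docx is a higher-level option if installing packages is allowed.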
Hello,
I started a new /collections.meta project that holds meta datasets (attributes & more) in comma-separated values (CSV) format for the Pixel Art Collections.
The first meta datasets include blockydoge.csv, goodbyepunks.csv, mafiapunks.csv and more.
Questions and comments welcome.
Note: the order follows the all-in-one composite image in the Pixel Art Collection; that is, the zero-based id numbers in the datasets match the tile id numbers.
I've been looking around for a while, but I haven't found one. Does one even exist?
So, to clear things up: I mean an open-source data format that is used (imported/exported) by many apps. The format should cover pasted images, handwriting, paper format, etc.
So I want to format the console so I can go back to older Minecraft updates, and I'm wondering: is it worth it, given that I have my NNID and game data from the Wii and Wii U?
Are you pulling from APIs (REST, GraphQL, or SOAP), Files (Flat Files, JSON, XML, etc), databases (Relational, Document, etc) or something else?
I've tried putting the 120 GB drive back in, but it seems to have already been formatted, and after logging in I'm met with a blank screen with all my programs unable to open. Not even File Manager. Which is weird, as I confirmed that every file from the first drive was cloned to the second drive.