What does it mean in a k-uniform word for letters to be alternating?
For example, take the 2-uniform word 2 3 1 2 1 3. Why do 2 and 3 alternate, and 1 and 2 alternate, but 1 and 3 do not?
To me, alternating means that if (2,3) occurs then (3,2) occurs as well. But clearly that is not what it means here.
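As far as I can tell from the definitions I've seen, x and y alternate when deleting every letter other than x and y leaves a word of the form xyxy… or yxyx…, i.e. no two equal letters end up adjacent. A quick sketch to check this (Python, just for illustration):

def alternate(word, x, y):
    # keep only x and y; they alternate iff no two equal letters are adjacent
    r = [c for c in word if c in (x, y)]
    return all(a != b for a, b in zip(r, r[1:]))

w = [2, 3, 1, 2, 1, 3]
print(alternate(w, 2, 3))  # True  -- restriction is 2 3 2 3
print(alternate(w, 1, 2))  # True  -- restriction is 2 1 2 1
print(alternate(w, 1, 3))  # False -- restriction is 3 1 1 3

That matches the example above: 2,3 and 1,2 alternate, while 1,3 does not.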
I'm a mathematician finding a renewed interest in economics. I've been reading about market efficiency, and I can't help but notice that the Efficient Market Hypothesis feels a lot like a statement of the (enriched) Yoneda lemma.
To substantiate this connection, we would need to model a market using enriched categorical structures. Conventional econometrics seems insufficient to give the idea any real sharpness or precision. People may hastily construct models based on naive numerical market indicators, obscuring more fundamental patterns of behavior and organization. I'm still getting up to speed on the contemporary conversation surrounding this subject, so I will wait until someone more informed can lend their insight. Instead, I will speak more from the category theory side of things in order to justify these beliefs of mine.
Categories are usually introduced as "universes of formal objects". The action comes from the scope of formal transformations they serve to collect and organize. Matrices organize into categories, where you can multiply matrices with matching dimensions. The whole category is often treated as a sort of space, and the transformations it contains are paths running around inside of it. Sometimes you can take different paths to reach the same result.
The Yoneda lemma says that you can take any point in one of these spaces, and the collection of all the paths that start (or end) at that point manages to store all of the information "inside" of it. In this light, the Efficient Market Hypothesis would be a corollary of a statement like, "the utilities of a market, and all of the transactions through which they may be exchanged, form a category." If you represent a market this way, any inefficiency of the market may be articulated in terms of some potential utility within the representation. The abstraction just sweeps inefficiencies under the rug.
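For concreteness, the ordinary (Set-valued) statement I have in mind is

$$\mathrm{Nat}\big(\mathcal{C}(-,A),\,F\big)\;\cong\;F(A)$$

for a locally small category $\mathcal{C}$, an object $A$, and a functor $F\colon \mathcal{C}^{\mathrm{op}} \to \mathbf{Set}$, naturally in both $A$ and $F$. Taking $F = \mathcal{C}(-,B)$ shows that the Yoneda embedding $A \mapsto \mathcal{C}(-,A)$ is fully faithful: an object is determined, up to isomorphism, by the totality of maps into it. The enriched version replaces $\mathbf{Set}$ by the enriching category.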
I think the existing arguments for the EMH are already implicit applications of the Yoneda lemma. Articulating them in this language may clarify their inadequacy and offer new concepts of risk and (in)efficiency. Maybe it starts with the failure of real markets to be modeled as CATEGORIES of utility exchange. Perhaps this line of description can help explain the relationships between market exchange and factors like reciprocity. Perhaps the risk of a particular transaction can be expressed in the language of "obstructions to representability" of some presheaf on a utility category.
Hi r/machinelearning,
For the past 1.5 years I have been organizing an online journal club on the topic of Graph Representation Learning. We meet either weekly or fortnightly via Zoom to discuss a relevant paper.
We are a small and friendly group and we would like to invite others who have similar interests to join us.
We meet on Thursdays, 6:00pm-7:30pm, Canada/Pacific timezone, and our next meeting is on January 20, 2022.
You are welcome to join us here.
Cheers!
I'm trying to classify a certain family of Boolean functions, and need to represent the function as a graph. Is there any well-known graph representation for a Boolean function that captures the information that it is Boolean?
I'm presently using the variables as vertices with an edge between two if they are present in the same monomial. This does not take into account that the function is Boolean. I thought of 2-colorings of the vertices of a hypercube, but that does not really put any restriction on the graph. For my problem, I would need to use some property of the graph that results from the function being Boolean. Can someone provide some ideas to do this?
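For reference, here is roughly what my current construction looks like, as a sketch in Python with networkx (assuming the function is given as a list of monomials, each a set of variable indices):

import itertools
import networkx as nx

def cooccurrence_graph(monomials):
    # variables are vertices; two variables are joined
    # whenever they appear together in some monomial
    G = nx.Graph()
    for mono in monomials:
        G.add_nodes_from(mono)
        G.add_edges_from(itertools.combinations(sorted(mono), 2))
    return G

# e.g. f(x1, x2, x3) = x1*x2 XOR x2*x3 XOR x1
print(sorted(cooccurrence_graph([{1, 2}, {2, 3}, {1}]).edges()))  # [(1, 2), (2, 3)]

As noted, nothing in this construction uses the fact that the function is Boolean, which is exactly the problem.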
When reading different articles, it's natural to be curious about the data and charts being presented in Word or PDF format. It would be useful to be able to snoop around in them. Can we develop a new format for reports that builds in a dynamic component? Currently, you have to write a lot of code or develop an accompanying dashboard or spreadsheet, neither of which feels like the natural next step.
The following is mentioned in Wiki:
>The main operation performed by the adjacency list data structure is to report a list of the neighbors of a given vertex. Using any of the implementations detailed above, this can be performed in constant time per neighbor. In other words, the total time to report all of the neighbors of a vertex v is proportional to the degree of v
Why is it O(deg(v)) instead of O(1)? Can't we just directly access the list associated with a particular vertex?
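Here's a small sketch of how I picture it (Python, dict-of-lists adjacency): as I understand the quoted passage, fetching the list for a vertex is indeed O(1), but reporting every neighbour still walks that list, so the total work is one step per neighbour.

adj = {
    'a': ['b', 'c', 'd'],
    'b': ['a'],
    'c': ['a'],
    'd': ['a'],
}

def report_neighbors(graph, v):
    bucket = graph[v]   # O(1): direct lookup of the list itself
    for u in bucket:    # but producing the neighbours costs one step each,
        yield u         # i.e. Theta(deg(v)) in total

print(list(report_neighbors(adj, 'a')))  # ['b', 'c', 'd'] -- 3 steps for deg(a) = 3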
Is there any possible way to export a GeoGebra interactive 2D/3D graph (for example: https://www.geogebra.org/m/QsKqNSEd ), attach it in Word, and be able to manipulate it directly from there? I know it is possible to add 3D models to a Word document (in .obj format). What about GeoGebra? (If there is no built-in tool, do you know of any existing add-in for Word that can do it?) Thank you!
Hi all,
Presenting a general framework for Graph Neural Networks to learn positional encodings (PE) alongside structural representations, applicable to any MP-GNN, including (Graph) Transformers.
"Graph Neural Networks with Learnable Structural and Positional Representations"
Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio and Xavier Bresson.
Paper: https://arxiv.org/abs/2110.07875
Code: https://github.com/vijaydwivedi75/gnn-lspe
#2minutebrief
Nodes in a graph do not have canonical positional information, like the global word positions in a sentence. This gives rise to limitations such as the lack of (global) structural information when message-passing GNNs are applied to learn on graphs. As a result, such models cannot distinguish isomorphic nodes or other graph symmetries.
In this work, we consider this problem of graph PEs and propose a framework, named LSPE, that can be used with any MP-GNN to learn positional and structural feature representations at the same time, thus effectively capturing these two essential properties and tuning them w.r.t. the task at hand.
Fig. The general MPGNNs-LSPE architecture.
In brief, LSPE enhances the capabilities of an MP-GNN by letting it learn a positional representation for each node alongside the usual structural one (a rough sketch follows below). These steps improve several MP-GNNs and Transformer-GNNs, providing a performance boost of up to 64% on molecular datasets, while retaining the efficient linear complexity of message passing and producing more expressive node embeddings.
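As a rough, illustrative sketch only (this is not the paper's exact update rule; the names lspe_layer, Wh and Wp are made up for the example, and the real implementation is in the linked repo), one layer that carries a structural feature matrix H and a positional feature matrix P side by side might look like:

import numpy as np

def lspe_layer(A, H, P, Wh, Wp):
    # A: (n, n) adjacency, H: (n, dh) structural features,
    # P: (n, dp) positional features, Wh/Wp: weight matrices
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    HP = np.concatenate([H, P], axis=1)      # structure and position travel together
    H_next = np.tanh((A @ HP) / deg @ Wh)    # structural update also sees positions
    P_next = np.tanh((A @ P) / deg @ Wp)     # positional update kept separate
    return H_next, P_next

# toy usage: 4-node cycle, 8-dim structural and 2-dim positional features
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
H, P = rng.normal(size=(4, 8)), rng.normal(size=(4, 2))
Wh, Wp = rng.normal(size=(10, 8)), rng.normal(size=(2, 2))
H, P = lspe_layer(A, H, P, Wh, Wp)

The sketch only illustrates keeping the two representations separate and updating both at every layer; see the paper and code above for the actual message functions and for how the positional features are initialised.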
More background
Content Warning: This is going to be discussing ideas around disability representation in sff. Some of the tools used are dated, and thus may use ableist terminology.
[Also obligatory link so nothing too weird comes up in the background]( https://imgur.com/a/6kJQSAP )
So to preface. I'm not here to discuss whether sff should have disability representation. I'm taking it as a given that yes, it should, and it should try to be representative of real world populations. If that's a conversation you want to have, please go do it somewhere else. Anyone who comments something to the effect of "disabled people shouldn't exist in sff because it's meant to be eScApIsT" will be blocked and reported, because rule 1 is "be kind" and saying an entire group of people don't deserve escapism too is not being kind.
It's come up several times over the past couple of months that disability representation in fantasy isn't really… representative of the kinds of disabilities real people have. This isn't, as far as I've seen, anything that anyone's sat down and counted; rather, it's a general trend that the community has noticed. So, I decided to see if I could figure out whether this is purely a myth or something actually based in reality.
Methodology-wise, there are lots of holes in this, so you have been warned. This was the method I had available to me as an individual who didn't want to trawl through ASOIAF to spot every single disabled character. If you don't like it, you're welcome to do it yourself. This is not a particularly rigorous or academic study; I just wanted to see if there were any broadly evident trends in sff.
I took a bunch of rec threads from r/fantasy mentioning disability (available in the master document) and plugged them all into a Google Sheets thingy. I sorted them based on the type of disability mentioned, using an adjusted version of the somewhat old-fashioned but still functional IDEA system of categorisation (please note I am not American, I'm just using this categorisation because it breaks things down more finely than other tools). Recommendations where the specific disability was not mentioned were not included in the master doc.
I was looking at Thomas Kipf's page on Graph Convolutional Networks and on the page, he has a neat animation of how the node features are forming clusters according to their classes. Here is a direct link to the video: https://tkipf.github.io/graph-convolutional-networks/images/video.mp4
This form of visualization can be really helpful to see if a training process is being adversely affected by oversmoothing. However, I can't find the code for it. What Python libraries are good for plotting networks like this?
I can make a 2D UMAP plot showing the node clusters, but I can't figure out how to draw the edges.
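A sketch of the kind of thing I'm after, using networkx and matplotlib with the 2D embeddings passed in as node positions (the graph, embeddings and labels below are placeholders):

import networkx as nx
import numpy as np
import matplotlib.pyplot as plt

G = nx.karate_club_graph()                                  # placeholder graph
emb = np.random.rand(G.number_of_nodes(), 2)                # stand-in for UMAP/GNN output
labels = [int(G.nodes[n]['club'] == 'Mr. Hi') for n in G]   # stand-in class ids

pos = {n: emb[i] for i, n in enumerate(G.nodes())}          # node position = its embedding
nx.draw_networkx_edges(G, pos, alpha=0.2)
nx.draw_networkx_nodes(G, pos, node_color=labels, node_size=40, cmap='coolwarm')
plt.axis('off')
plt.show()

For the animation, the plot could be redrawn once per epoch with that epoch's embeddings and the frames stitched together, e.g. with matplotlib.animation.FuncAnimation or by saving PNGs.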
Here's like a political compass I was thinking of.
Not sure if I can really count that as like a unit of measurement, but maybe there's a better way to describe something that relates to two different spectrums?
Either way, if there's a word/some way to describe like a graph/this compass for general usage (not just specifically a political compass), that'd be great. Hopefully this makes sense, thanks!
Update below
I may be missing something obvious here, but I'm trying to implement something like Conway's Life over an infinite field, using the Store comonad and Representable functors.

I got it working using a fixed-size Matrix, following the outline in Chris Penner's article.

Now I want to extend that to work on an infinite field, but starting from a finite starting arrangement. I'll use a Map (Int, Int) Bool to record the contents of each cell at a particular position.

I can fudge the "infinite" part of the field by having the Map return False for every cell not already specified. I can also ask the Map for the cells it already knows about, and use that as the basis for the experiment to calculate the next generation.
My outline code looks like this:
{-# LANGUAGE TypeFamilies #-}

import Control.Comonad.Representable.Store (Store)
import Data.Distributive (Distributive (..))
import Data.Functor.Rep (Representable (..), distributeRep)
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Coord = (Int, Int)
type Grid = M.Map Coord Bool
type StoredGrid = Store (M.Map Coord) Bool
type Rule = StoredGrid -> Bool
type GridCache = S.Set Grid

instance Distributive (M.Map Coord) where
  distribute = distributeRep

-- The "infinite" field: unknown cells default to False.
instance Representable (M.Map Coord) where
  type Rep (M.Map Coord) = Coord
  index m c = M.findWithDefault False c m
  tabulate = M.empty
This fails to compile with two errors, both in the Representable instance:

• Couldn't match type ‘a’ with ‘Bool’
    Expected type: M.Map Coord Bool
      Actual type: M.Map Coord a
• Relevant bindings include
    index m c = M.findWithDefault False c m

• Couldn't match expected type ‘(Rep (M.Map Coord) -> a) -> M.Map Coord a’
              with actual type ‘M.Map k0 a0’
• Relevant bindings include
    tabulate = M.empty
Any ideas how to fix this?
Thanks all for the suggestions and ideas. However, a combination of limits in my approach, and an underlying asymmetry in the grid, conspired to make me abandon this approach and do something much more direct.
You can read what I finally did on my blog.