Could the 1967 adaptation of The Hobbit be in the public domain? I did some research of my own, but found nothing that either confirms or denies it.
πŸ‘︎ 4
πŸ‘€︎ u/naranjaPenguin21
πŸ“…︎ Dec 14 2021
[D] What are techniques for few-shot domain adaptation?

I am trying to do domain adaptation from synthetic to real images. The task is anomaly detection.

Usually the problem in domain adaptation is that there are a lot of target images without labels. In my case, I only have a few target images, but they are labeled. Therefore, the common technique of creating pseudo-labels for the target domain is not useful.

My current idea is to do some kind of style transfer from the synthetic images to the real images (e.g., CyCADA, contrastive unpaired image-to-image translation).
Do you have other ideas for domain adaptation where I have a lot of source images but only a few target images (but with ground truth)?

I would be glad to be pointed in the right direction.
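
One simple baseline in this few-labeled-target setting (a hedged sketch, not a recommendation from the thread) is to fine-tune the synthetically trained model jointly on the source set and the handful of labeled real images, oversampling the real ones so they actually influence the gradients. All datasets and the model below are placeholder stand-ins:

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader, WeightedRandomSampler

# Hypothetical stand-ins: many labeled synthetic images, a handful of labeled real images.
synthetic_ds = TensorDataset(torch.randn(1000, 3, 64, 64), torch.randint(0, 2, (1000,)))
real_ds = TensorDataset(torch.randn(20, 3, 64, 64), torch.randint(0, 2, (20,)))

combined = ConcatDataset([synthetic_ds, real_ds])
# Oversample the few real images so every batch mixes both domains.
weights = [1.0] * len(synthetic_ds) + [len(synthetic_ds) / len(real_ds)] * len(real_ds)
sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
loader = DataLoader(combined, batch_size=32, sampler=sampler)

# Placeholder for a model already pretrained on the synthetic source domain.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```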

πŸ‘︎ 6
πŸ“…︎ Dec 10 2021
Quiz answers: Eigenfaces, Domain adaptation, Causality, Manifold Hypothesis, Denoising Autoencoder youtu.be/yPXNQ6Ig7hQ
πŸ‘︎ 2
πŸ‘€︎ u/AICoffeeBreak
πŸ“…︎ Dec 26 2021
A new dataset for text classification and domain adaptation in social media

A dataset of ~22,500 labeled documents across four different domains. You can find it here:

https://github.com/p-karisani/illness-dataset

πŸ‘︎ 6
πŸ‘€︎ u/payam_ka
πŸ“…︎ Dec 14 2021
Domain Adaptation for Image Translation

Consider two domains, cartoon and real world. I want my final network to take as input a rainy image and output the corresponding clear image. I have a paired rainy-clear dataset in the cartoon domain, and I have unpaired rainy and clear images in the real domain. My network does well in the cartoon domain. How do I make it generalize well to the real domain? Could you direct me to research papers that do something similar?
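
One family of methods worth looking into is domain-adversarial feature alignment: keep the supervised paired loss in the cartoon domain and add a domain classifier, connected through a gradient reversal layer, that tries to tell cartoon features from real features. A rough, hedged PyTorch sketch (all module names are hypothetical placeholders, not taken from any specific paper):

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, flips (and scales) the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

# Hypothetical pieces: `encoder` + `decoder` form the deraining network,
# `domain_head` predicts cartoon (0) vs. real (1) from encoder features.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1))
domain_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

def training_step(cartoon_rainy, cartoon_clear, real_rainy, lamb=0.1):
    # Supervised deraining loss on the paired cartoon data.
    feats_c = encoder(cartoon_rainy)
    recon_loss = nn.functional.l1_loss(decoder(feats_c), cartoon_clear)

    # Domain-adversarial loss: the reversed gradient pushes the encoder to make
    # cartoon and real features indistinguishable, while the head learns to separate them.
    feats_r = encoder(real_rainy)
    feats = torch.cat([GradReverse.apply(feats_c, lamb), GradReverse.apply(feats_r, lamb)])
    labels = torch.cat([torch.zeros(len(cartoon_rainy)), torch.ones(len(real_rainy))]).long()
    domain_loss = nn.functional.cross_entropy(domain_head(feats), labels)
    return recon_loss + domain_loss
```

Because of the gradient reversal trick, one optimizer step over this combined loss updates the domain head normally while adversarially updating the encoder, so no separate discriminator phase is needed.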

πŸ‘︎ 2
πŸ‘€︎ u/Mundane_Gene_5473
πŸ“…︎ Dec 20 2021
"The Great Gatsby" is entering the public domain and we’re due for a Muppet adaptation theverge.com/tldr/2020/12…
πŸ‘︎ 48k
πŸ‘€︎ u/inthetownwhere
πŸ“…︎ Jan 01 2021
Unsupervised Domain Adaptation: A Reality Check arxiv.org/abs/2111.15672v…
πŸ‘︎ 2
πŸ‘€︎ u/ShareScienceBot
πŸ“…︎ Dec 03 2021
Postdoc position in Strasbourg: DL, Domain Adaptation, Multi-Modal Representations groups.google.com/g/ml-ne…
πŸ‘︎ 2
πŸ‘€︎ u/ml_news_bot
πŸ“…︎ Nov 27 2021
The title of Kenneth Branagh's 1994 adaptation of Frankenstein is actually a misnomer. The novel was in the public domain prior to production of the film, meaning it's as much my Frankenstein as it is Mary Shelley's. My lawsuit against TriStar Pictures, unfortunately, went nowhere
πŸ‘︎ 34
πŸ‘€︎ u/KscILLBILL
πŸ“…︎ Sep 28 2021
"The Great Gatsby" is entering the public domain and we’re due for a Muppet adaptation theverge.com/tldr/2020/12…
πŸ‘︎ 9k
πŸ‘€︎ u/inthetownwhere
πŸ“…︎ Jan 01 2021
[self-promotion] [Dataset] [Project] Domain adaptation text recognition/OCR dataset (MSDA) and benchmark: Multi-source domain adaptation dataset for text recognition /r/AcademicCommunity/comm…
πŸ‘︎ 13
πŸ“…︎ Sep 30 2021
Quick and Easy GAN Domain Adaptation explained: Sketch Your Own GAN by Sheng-Yu Wang et al. 5 minute summary

Sketch Your Own GAN domain adaptation

Want to quickly train an entire GAN that generates realistic images from just two quick hand-drawn sketches? Sheng-Yu Wang and team have got you covered! They propose a new method to fine-tune a GAN to a small set of user-provided sketches that determine the shapes and poses of the objects in the synthesized images. They use a domain adversarial loss and different regularization methods to preserve the original model's diversity and image quality.

The authors motivate the necessity of their approach mainly with the fact that training conditional GANs from scratch is simply a lot of work: you need powerful GPUs, annotated data, careful alignment, and pre-processing. For an end-user to generate images of, say, cats in a specific pose, a very large number of such images is normally required; with the proposed approach, only a couple of sketches and a pretrained GAN are needed to create a new GAN that synthesizes images resembling the shape and orientation of the sketches while retaining the diversity and quality of the original model. The resulting models can be used for random sampling, latent space interpolation, and photo editing.
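
Very roughly, the fine-tuning objective described above combines an adversarial loss in sketch space with an adversarial loss in image space plus regularization. The sketch below is only a loose, hedged reading of that description (the exact losses and weights are in the paper), with every module name hypothetical:

```python
from torch import nn

def generator_step(G, G_frozen, D_sketch, D_image, to_sketch, z,
                   lambda_image=0.7, lambda_reg=0.1):
    """G: generator being fine-tuned; G_frozen: frozen copy of the original generator;
    D_sketch / D_image: discriminators on the sketch and image domains;
    to_sketch: an off-the-shelf photo-to-sketch network."""
    fake = G(z)
    # Push the *sketch* of generated images towards the user-provided sketches.
    loss_sketch = -D_sketch(to_sketch(fake)).mean()
    # Keep the generated *images* in the original model's image domain (quality/diversity).
    loss_image = -D_image(fake).mean()
    # Simple image-space regularization: stay close to what the original model produced.
    loss_reg = nn.functional.l1_loss(fake, G_frozen(z))
    return loss_sketch + lambda_image * loss_image + lambda_reg * loss_reg
```

The discriminator updates (not shown) would train D_sketch on user sketches vs. to_sketch(fake) and D_image on original-domain images vs. fake.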

Read the full paper digest or the blog post (reading time ~5 minutes) to learn about Cross-Domain Adversarial Learning, how Image Space Regularization helps improve the results, and what optimization targets are used in Sketch Your Own GAN.

Meanwhile, check out the paper digest poster by Casual GAN Papers!

Sketch Your Own GAN explained

[Full Explanation/ Blog Post] [Arxiv] [Code]

More recent popular computer vision paper breakdowns:

>[3D-Inpainting]
>
>[Real-ESRGAN]
>
>[SupCon]

πŸ‘︎ 10
πŸ“…︎ Aug 14 2021
Instance Adaptive Self-training for Unsupervised Domain Adaptation (ECCV 2020)

The divergence between labeled training data and unlabeled testing data is a significant challenge for recent deep learning models. Unsupervised domain adaptation (UDA) attempts to solve such a problem. Recent works show that self-training is a powerful approach to UDA. However, existing methods have difficulty balancing scalability and performance. In this paper, we propose an instance-adaptive self-training framework for UDA on the task of semantic segmentation. To effectively improve the quality of pseudo-labels, we develop a novel pseudo-label generation strategy with an instance-adaptive selector. In addition, we propose region-guided regularization to smooth the pseudo-label region and sharpen the non-pseudo-label region. Our method is concise and efficient, and it generalizes easily to other unsupervised domain adaptation methods. Experiments on 'GTA5 to Cityscapes' and 'SYNTHIA to Cityscapes' demonstrate the superior performance of our approach compared with state-of-the-art methods.
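
As a rough illustration of what "instance adaptive" pseudo-labeling means in practice (a hedged sketch, not the authors' exact algorithm): instead of one global confidence threshold, each target image derives its own per-class thresholds from its own prediction confidences.

```python
import torch

def pseudo_labels_instance_adaptive(probs, base_thresh=0.9, alpha=0.8, ignore_index=255):
    """Hedged sketch of instance-adaptive pseudo-labeling for segmentation.

    probs: (C, H, W) softmax output for one target image. Each image (and class)
    gets its own threshold derived from its own confidence statistics, so easy and
    hard images keep a comparable fraction of pseudo-labeled pixels.
    """
    conf, labels = probs.max(dim=0)                      # per-pixel confidence and argmax class
    thresholds = torch.full_like(conf, base_thresh)
    for c in labels.unique():
        mask = labels == c
        # Per-image, per-class threshold: a quantile of this image's own confidences.
        class_thresh = min(base_thresh, torch.quantile(conf[mask], alpha).item())
        thresholds[mask] = class_thresh
    labels = labels.clone()
    labels[conf < thresholds] = ignore_index             # low-confidence pixels are ignored
    return labels
```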

Code

Paper

πŸ‘︎ 2
πŸ“…︎ Sep 30 2021
Turn your dog into Nick Cage! StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators by Rinon Gal et al. explained in 5 minutes

Just look at these crazy prompts!

How insane does it sound to describe a GAN with text (e.g. Human -> Werewolf) and get a SOTA generator that synthesizes images corresponding to the provided text query in any domain?! Rinon Gal and colleagues leverage the semantic power of CLIP's text-image latent space to shift a pretrained generator to a new domain. All it takes is a natural text prompt and a few minutes of training. The domains that StyleGAN-NADA covers are outright bizarre (and creepily specific) - Fernando Botero Painting, Dog → Nicolas Cage (WTF 😂), and more.

Usually it is hard (or outright impossible) to obtain the large number of images from a specific domain that is required to train a GAN. One can leverage the information learned by Vision-Language models such as CLIP, yet applying these models to manipulate pretrained generators to synthesize out-of-domain images is far from trivial. The authors propose to use dual generators and an adaptive layer selection procedure to increase training stability. Unlike prior works, StyleGAN-NADA works in a zero-shot manner and automatically selects a subset of layers to update at each iteration.
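
The core idea can be sketched as a CLIP-space "directional" loss: the shift from the frozen generator's images to the fine-tuned generator's images should point the same way as the shift from the source text to the target text. A hedged sketch assuming an OpenAI-CLIP-style model exposing `encode_image` / `encode_text` (not the authors' exact code):

```python
import torch
import torch.nn.functional as F

def clip_directional_loss(clip_model, tokenizer, frozen_imgs, trained_imgs,
                          source_text="photo", target_text="werewolf"):
    """Direction between source/target *text* embeddings should match the direction
    between images from the frozen generator and the fine-tuned one.
    frozen_imgs / trained_imgs are assumed to already be resized and normalized for CLIP."""
    with torch.no_grad():
        text_tokens = tokenizer([source_text, target_text])
        text_emb = clip_model.encode_text(text_tokens)
        text_dir = F.normalize(text_emb[1] - text_emb[0], dim=-1)

    img_emb_frozen = clip_model.encode_image(frozen_imgs)
    img_emb_trained = clip_model.encode_image(trained_imgs)
    img_dir = F.normalize(img_emb_trained - img_emb_frozen, dim=-1)

    return (1 - (img_dir * text_dir).sum(dim=-1)).mean()   # 1 - cosine similarity
```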

Read the full paper digest or the blog post (reading time ~5 minutes) to learn about Cross-Domain Adversarial Learning, how Image Space Regularization helps improve the results, and what optimization targets are used in Sketch Your Own GAN.

Meanwhile, check out the paper digest poster by Casual GAN Papers!

StyleGAN-NADA explained

[Full Explanation / Blog Post] [Arxiv] [Code]

More recent popular computer vision paper breakdowns:

>[3D-Inpainting]
>
>[Real-ESRGAN]
>
>[Sketch Your Own GAN]

... keep reading on reddit ➡

πŸ‘︎ 19
πŸ“…︎ Aug 15 2021
Alright everybody, time to exercise the creative side of our brains a little bit: what would be "the best" adaptation that could be done of The Great Gatsby now that it has become public domain?
πŸ‘︎ 441
πŸ‘€︎ u/Coolnametag
πŸ“…︎ Jan 05 2021
[N] Universal Domain Adaptation Challenge (VisDA-21) at NeurIPS'21

Interested in Universal Domain Adaptation? Join the VisDA 2021 NeurIPS competition!

This challenge will test how well models can (1) adapt to several distribution shifts and (2) detect unknown unknowns.

Top 3 teams will win cash prizes ($2k for 1st, $500 each for 2nd and 3rd) in the form of VISA gift cards.

More details (and registration) at the website and below

-----------------------------------------------------------------------------

Progress in machine learning is typically measured by training and testing a model on the same distribution of data, i.e., the same domain. However, in real-world applications, models often encounter out-of-distribution data, such as novel camera viewpoints, backgrounds or image quality. The Visual Domain Adaptation (VisDA) challenge tests computer vision models' ability to generalize and adapt to novel target distributions by measuring accuracy on out-of-distribution data.

The 2021 VisDA competition is our 5th time holding the challenge! [2017], [2018], [2019], [2020]. This year, we invite methods that can adapt to novel test distributions in an open-world setting. Teams will be given labeled source data from ImageNet and unlabeled target data from a different target distribution. In addition to input distribution shift, the target data may also have missing and/or novel classes as in the Universal Domain Adaptation (UniDA) setting [1]. Successful approaches will improve classification accuracy of known categories while learning to deal with missing and/or unknown categories.
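
For the "unknown unknowns" part, a naive baseline (a hedged sketch, not an official baseline of the challenge) is simply to route low-confidence predictions to an extra "unknown" label:

```python
import torch

def predict_with_unknowns(logits, known_classes, threshold=0.5):
    """Samples whose maximum softmax confidence over the known classes falls below a
    threshold are labeled "unknown" instead of being forced into a known category."""
    probs = torch.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    pred = pred.clone()
    pred[conf < threshold] = known_classes   # reserve index `known_classes` for "unknown"
    return pred
```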

πŸ‘︎ 7
πŸ‘€︎ u/JustSayNoToSummer
πŸ“…︎ Jul 30 2021
How do I copyright an adaptation (script) that is based on material from the public domain?

So, I wrote a script based on a work by Geoffrey Chaucer, and I got to the part asking whether it's based on published work. Technically it is, but I can't find the oldest publication. And if it's in the public domain, do I even have to say it's based on a published work?

πŸ‘︎ 2
πŸ‘€︎ u/Righteousslayer
πŸ“…︎ Aug 07 2021
[R] StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators arxiv.org/abs/2108.00946
πŸ‘︎ 4
πŸ‘€︎ u/bert4QA
πŸ“…︎ Aug 19 2021
The 'Great Gatsby' Glut: F. Scott Fitzgerald's classic novel about America and aspiration is now in the public domain, so new editions, as well as a graphic novel and a zombie adaptation, have gotten the green light. nytimes.com/2021/01/14/bo…
πŸ‘︎ 174
πŸ‘€︎ u/drak0bsidian
πŸ“…︎ Jan 15 2021
For domain adaptation (DA) research, besides showing the t-SNE visualization of the embeddings, is there any other way to prove the efficacy of the model?

Hi, new to DA.

Pretty much what the title said.

Also, have you ever seen a DA paper that does not show the embedding visualization?

And how doubtful would you be if you saw one?

Thanks.
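
One quantitative measure sometimes reported alongside t-SNE plots is the proxy A-distance: train a simple classifier to distinguish source features from target features and convert its error into a divergence score (harder to separate means better aligned). A hedged scikit-learn sketch, where the feature arrays are assumed to come from your adapted model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_a_distance(source_feats, target_feats):
    """Proxy A-distance = 2 * (1 - 2 * err), where err is the error of a
    domain classifier trained to tell source features from target features."""
    X = np.vstack([source_feats, target_feats])
    y = np.concatenate([np.zeros(len(source_feats)), np.ones(len(target_feats))])
    err = 1.0 - cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    return 2.0 * (1.0 - 2.0 * err)
```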

πŸ‘︎ 7
πŸ‘€︎ u/xiikjuy
πŸ“…︎ Jul 21 2021
Help: I'm looking for a paper about DA (both Domain Adaptation and Data Augmentation)

I'm looking for a paper about DA (both Domain Adaptation and Data Augmentation).

I heard that the paper has a table showing the impact of each data augmentation method (flip/rotate/shift, etc.) on domain adaptation (I think the paper argues that data augmentation alone is enough for domain adaptation).
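
For reference, the flip/rotate/shift style of augmentation such a table would compare looks roughly like the following torchvision pipeline (a hedged illustration, not taken from the paper being searched for):

```python
from torchvision import transforms

# Source-domain training augmentations of the flip / rotate / shift variety.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # shift by up to 10%
    transforms.ToTensor(),
])
```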

I searched on arXiv and Google with these keywords (domain adaptation, data augmentation, domain invariance, etc.) but couldn't find the paper.

If you have a good keyword, site, or paper, please recommend it.

Sorry for my bad English.

Thank you.

πŸ‘︎ 3
πŸ‘€︎ u/landu2
πŸ“…︎ Jun 18 2021
You are welcome to submit your paper to the MICCAI BrainLes(s) Workshop 2021 and related challenges (Brain Tumor Segmentation, Federated Tumor Segmentation, Cross-Modality Domain Adaptation, and Quantification of Uncertainties in Biomedical Image Quantification) http://www.brainlesion-workshop.org/ reddit.com/gallery/o6kmxm
πŸ‘︎ 5
πŸ‘€︎ u/alecrimi
πŸ“…︎ Jun 23 2021
[D] BERT Finetuning/Domain Adaptation

Usually, when BERT is fine-tuned on a downstream task (task adaptation), only a small dataset (500 samples) is required. I am wondering how much data we need to fine-tune it on a specific domain like finance (domain adaptation). And do you expect the resulting model to outperform the original BERT on downstream tasks in the financial domain? Thanks in advance.
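
One common recipe for the domain-adaptation half of this question is continued masked-language-model pretraining on in-domain text before task fine-tuning. A hedged sketch with the Hugging Face stack (the corpus name is only an example, and how much text is enough is exactly the open question here):

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Example in-domain corpus; any collection of financial text would play the same role.
corpus = load_dataset("financial_phrasebank", "sentences_allagree", split="train")
tokenized = corpus.map(lambda ex: tokenizer(ex["sentence"], truncation=True, max_length=128),
                       batched=True, remove_columns=corpus.column_names)

# Standard MLM objective on the domain text (15% masking), then fine-tune on the task as usual.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-finance-dapt", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```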

πŸ‘︎ 6
πŸ‘€︎ u/MaJhole007
πŸ“…︎ Apr 19 2021
Rumor: Ayakashi Triangle by "To Love-Ru" mangaka Yabuki Kentarou might be receiving an anime adaptation, based upon a found domain. twitter.com/snkynews/stat…
πŸ‘︎ 19
πŸ‘€︎ u/Turbostrider27
πŸ“…︎ Mar 10 2021
Domain Adaptation problems in Transfer learning

https://medium.com/nerd-for-tech/domain-adaptation-problems-in-machine-learning-ddfdff1f227c

πŸ‘︎ 2
πŸ‘€︎ u/Vivek_Murali
πŸ“…︎ Jul 01 2021
Domain Adaptation problems in Machine learning medium.com/nerd-for-tech/…
πŸ‘︎ 2
πŸ‘€︎ u/Vivek_Murali
πŸ“…︎ Jul 01 2021
I miss the old style Muppet adaptations of public domain movies. Which public domain story would you love to see adapted by The Muppets?
πŸ‘︎ 38
πŸ‘€︎ u/VisDev82
πŸ“…︎ Jan 01 2021
The Great Gatsby is entering the public domain and we're due for a Muppet adaptation theverge.com/platform/amp…
πŸ‘︎ 192
πŸ‘€︎ u/LEGO_Zelda
πŸ“…︎ Dec 31 2020
[R] A new dataset for text classification and domain adaptation in social media

A dataset of ~22,500 labeled documents across four different domains. You can find it here:

https://github.com/p-karisani/illness-dataset

πŸ‘︎ 5
πŸ‘€︎ u/payam_ka
πŸ“…︎ Dec 14 2021
[D] Turn your dog into Nick Cage! StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators by Rinon Gal et al. explained in 5 minutes

It's a dog, it's a Nick Cage ... it's StyleGAN-NADA

How insane does it sound to describe a GAN with text (e.g. Human -> Werewolf) and get a SOTA generator that synthesizes images corresponding to the provided text query in any domain?! Rinon Gal and colleagues leverage the semantic power of CLIP's text-image latent space to shift a pretrained generator to a new domain. All it takes is a natural text prompt and a few minutes of training. The domains that StyleGAN-NADA covers are outright bizarre (and creepily specific) - Fernando Botero Painting, Dog → Nicolas Cage (WTF 😂), and more.

Usually it is hard (or outright impossible) to obtain the large number of images from a specific domain that is required to train a GAN. One can leverage the information learned by Vision-Language models such as CLIP, yet applying these models to manipulate pretrained generators to synthesize out-of-domain images is far from trivial. The authors propose to use dual generators and an adaptive layer selection procedure to increase training stability. Unlike prior works, StyleGAN-NADA works in a zero-shot manner and automatically selects a subset of layers to update at each iteration.

Read the full paper digest or the blog post (reading time ~5 minutes) to learn about Cross-Domain Adversarial Learning, how Image Space Regularization helps improve the results, and what optimization targets are used in Sketch Your Own GAN.

Meanwhile, check out the paper digest poster by Casual GAN Papers!

StyleGAN-NADA explained

[Full Explanation / Blog Post] [Arxiv] [Code]

More recent popular computer vision paper breakdowns:

>[3D-Inpainting]
>
>[Real-ESRGAN]
>
>[Sketch Your Own GAN]

... keep reading on reddit ➡

πŸ‘︎ 5
πŸ“…︎ Aug 16 2021
Turn your dog into Nick Cage! StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators by Rinon Gal et al. explained in 5 minutes

It's a dog, it's a Nick Cage ... it's StyleGAN-NADA

How insane does it sound to describe a GAN with text (e.g. Human -> Werewolf) and get a SOTA generator that synthesizes images corresponding to the provided text query in any domain?! Rinon Gal and colleagues leverage the semantic power of CLIP's text-image latent space to shift a pretrained generator to a new domain. All it takes is a natural text prompt and a few minutes of training. The domains that StyleGAN-NADA covers are outright bizarre (and creepily specific) - Fernando Botero Painting, Dog → Nicolas Cage (WTF 😂), and more.

Usually it is hard (or outright impossible) to obtain the large number of images from a specific domain that is required to train a GAN. One can leverage the information learned by Vision-Language models such as CLIP, yet applying these models to manipulate pretrained generators to synthesize out-of-domain images is far from trivial. The authors propose to use dual generators and an adaptive layer selection procedure to increase training stability. Unlike prior works, StyleGAN-NADA works in a zero-shot manner and automatically selects a subset of layers to update at each iteration.

Read the full paper digest or the blog post (reading time ~5 minutes) to learn about Cross-Domain Adversarial Learning, how Image Space Regularization helps improve the results, and what optimization targets are used in Sketch Your Own GAN.

Meanwhile, check out the paper digest poster by Casual GAN Papers!

StyleGAN-NADA explained

[Full Explanation / Blog Post] [Arxiv] [Code]

More recent popular computer vision paper breakdowns:

>[3D-Inpainting]
>
>[Real-ESRGAN]
>
>[Sketch Your Own GAN]

... keep reading on reddit ➡

πŸ‘︎ 13
πŸ“…︎ Aug 16 2021
Universal Domain Adaptation Challenge (VisDA-21) at NeurIPS'21

Interested in Universal Domain Adaptation? Join the VisDA 2021 NeurIPS competition!

This challenge will test how well models can (1) adapt to several distribution shifts and (2) detect unknown unknowns.

Top 3 teams will win cash prizes ($2k for 1st, $500 each for 2nd and 3rd) in the form of VISA gift cards.

More details (and registration) at the website and below

-----------------------------------------------------------------------------

Progress in machine learning is typically measured by training and testing a model on the same distribution of data, i.e., the same domain. However, in real-world applications, models often encounter out-of-distribution data, such as novel camera viewpoints, backgrounds or image quality. The Visual Domain Adaptation (VisDA) challenge tests computer vision models' ability to generalize and adapt to novel target distributions by measuring accuracy on out-of-distribution data.

The 2021 VisDA competition is our 5th time holding the challenge! [2017], [2018], [2019], [2020]. This year, we invite methods that can adapt to novel test distributions in an open-world setting. Teams will be given labeled source data from ImageNet and unlabeled target data from a different target distribution. In addition to input distribution shift, the target data may also have missing and/or novel classes as in the Universal Domain Adaptation (UniDA) setting [1]. Successful approaches will improve classification accuracy of known categories while learning to deal with missing and/or unknown categories.

πŸ‘︎ 17
πŸ‘€︎ u/JustSayNoToSummer
πŸ“…︎ Aug 10 2021
Quick and Easy GAN Domain Adaptation explained: Sketch Your Own GAN by Sheng-Yu Wang et al. 5 minute summary

Sketch Your Own GAN domain adaptation

Want to quickly train an entire GAN that generates realistic images from just two quick hand-drawn sketches? Sheng-Yu Wang and team have got you covered! They propose a new method to fine-tune a GAN to a small set of user-provided sketches that determine the shapes and poses of the objects in the synthesized images. They use a domain adversarial loss and different regularization methods to preserve the original model's diversity and image quality.

The authors motivate the necessity of their approach mainly with the fact that training conditional GANs from scratch is simply a lot of work: you need powerful GPUs, annotated data, careful alignment, and pre-processing. For an end-user to generate images of, say, cats in a specific pose, a very large number of such images is normally required; with the proposed approach, only a couple of sketches and a pretrained GAN are needed to create a new GAN that synthesizes images resembling the shape and orientation of the sketches while retaining the diversity and quality of the original model. The resulting models can be used for random sampling, latent space interpolation, and photo editing.

Read the full paper digest or the blog post (reading time ~5 minutes) to learn about Cross-Domain Adversarial Learning, how Image Space Regularization helps improve the results, and what optimization targets are used in Sketch Your Own GAN.

Meanwhile, check out the paper digest poster by Casual GAN Papers!

Sketch Your Own GAN explained

[Full Explanation/ Blog Post] [Arxiv] [Code]

More recent popular computer vision paper breakdowns:

>[3D-Inpainting]
>
>[Real-ESRGAN]
>
>[SupCon]

πŸ‘︎ 11
πŸ“…︎ Aug 11 2021
[D] Quick and Easy GAN Domain Adaptation explained: Sketch Your Own GAN by Sheng-Yu Wang et al. 5 minute summary

Sketch Your Own GAN

Want to quickly train an entire GAN that generates realistic images from just two quick hand-drawn sketches? Sheng-Yu Wang and team have got you covered! They propose a new method to fine-tune a GAN to a small set of user-provided sketches that determine the shapes and poses of the objects in the synthesized images. They use a domain adversarial loss and different regularization methods to preserve the original model's diversity and image quality.

The authors motivate the necessity of their approach mainly with the fact that training conditional GANs from scratch is simply a lot of work: you need powerful GPUs, annotated data, careful alignment, and pre-processing. For an end-user to generate images of, say, cats in a specific pose, a very large number of such images is normally required; with the proposed approach, only a couple of sketches and a pretrained GAN are needed to create a new GAN that synthesizes images resembling the shape and orientation of the sketches while retaining the diversity and quality of the original model. The resulting models can be used for random sampling, latent space interpolation, and photo editing.

Read the full paper digest or the blog post (reading time ~5 minutes) to learn about Cross-Domain Adversarial Learning, how Image Space Regularization helps improve the results, and what optimization targets are used in Sketch Your Own GAN.

Meanwhile, check out the paper digest poster by Casual GAN Papers!

Sketch Your Own GAN explained

[Full Explanation/ Blog Post] [Arxiv] [Code]

More recent popular computer vision paper breakdowns:

>[3D-Inpainting]
>
>[Real-ESRGAN]
>
>[SupCon]

πŸ‘︎ 2
πŸ“…︎ Aug 14 2021
