Ming-Yu Liu
@liu_mingyu
Jun 19
We have released our #pix2pixHD under the BSD license
github.com/NVIDIA/pix2pix…
(It was previously under a CC non-commercial license.)
Please feel free to use it.
William Ngan
@williamngan
Jan 21
Recently I put together a small dataset of Bentley's classic snowflake photos, and trained a #pix2pixhd model.
Here I generated some shapes with Pts.js and ran them through the model. Initial results --
(I probably need to train it some more but it's so expensive! 😭) pic.twitter.com/djTFLNvUbv
Mario Klingemann
@quasimondo
Jan 21, 2019
I've created an experimental GAN architecture I call #RecuResGAN or "Recursive-Residual GAN", and I am pretty astonished:
- that it works at all
- by how well it works across a pretty wide range of scales
- that it is just 15% of the size of a comparable #pix2pixHD model pic.twitter.com/LCUoB2J7ql
Ming-Yu Liu
@liu_mingyu
Mar 15
Glad to see that our #GAN research works enable people to "generate realistic dance videos of NBA players for in-game entertainment."
#pix2pixHD, #vid2vid
medium.com/@getxpire/how-… pic.twitter.com/Xmd9rLk35A
samim
@samim
Dec 1, 2017
The #pix2pixHD work by @berkeley_ai & @NvidiaAI is a hint at the future of design tools:
youtube.com/watch?v=3AIpPl… Code coming soon: tcwang0509.github.io/pix2pixHD/
Christian Mio Loclair
@Mio_Loclair
Jan 21, 2019
Research at the lab | turning video games into interactive mode to question the value of designed pixels in future graphical productions | This is #RDR2Online controlled with a body #ai #BigGAN #pix2pixHD pic.twitter.com/DrwnxjU6Jb
Nono Martínez Alonso · Nono.MA ★
@nonoesp
Oct 16, 2018
#pix2pixHD by @NVIDIA "Synthesizing and manipulating 2048x1024 images with conditional GANs" tcwang0509.github.io/pix2pixHD/ pic.twitter.com/AfHsOnDzai
William Ngan
@williamngan
Jan 25
1. Draw shapes in #procreate, by hand
2. Run with #pix2pixhd, fingers crossed 🤞 pic.twitter.com/3s9jUxt148
Mario Klingemann
@quasimondo
Dec 31
By the end of 2017, my efforts to improve resolution were obliterated by two major breakthroughs in short succession: @nvidia first showed their highly realistic celebrities made with #PGAN, then followed up with #pix2pixHD shortly after.
twitter.com/quasimondo/sta…
William Ngan
@williamngan
Dec 19
Some generated snow crystals from #pix2pixHD GAN, based on Wilson Bentley's classic photos.
Training is still in progress, slowly slowly. More to come soon! pic.twitter.com/Ylf0H2NeNj
A m m a r Ul H a s s a n
@ammarulhassan2
Sep 21
I personally think that if this image were converted into segments (which is how pix2pixHD works), VAR wouldn't make these kinds of errors. We need an image-to-outline based approach here. @facebookai @NvidiaAI #pix2pixhd @premierleague twitter.com/JDNalton/statu…
Mario Klingemann
@quasimondo
Jul 8
Both models are shallow ResNets derived from #Pix2PixHD. I first tried #pix2pix UNets, but there the models learned to cheat very quickly and just abused the first skip connection to pass the information almost uncompressed.
MEMO M̸e̵h̶m̸e̸t̶ ̴S̴e̷l̴i̷m̸ ̵A̸k̶t̴e̷n̶
@memotv
Mar 20
Thx 🙏 :). (To complete the loop, this 👇 is based on #pix2pix, coauthored by @junyanz89 , also coauthor on @NvidiaAI #spade. & other #spade coauthors #TaesungPark @liu_mingyu @tcwang0509 are from #cyclegan #pix2pixhd #vid2vid). twitter.com/artnome/status…
Kyle Steinfeld
@ksteinfe
Mar 1
Some more developments in a series that extends work exhibited at #neurips4creativity #almosthome #gan #pix2pix #pix2pixhd 1/many pic.twitter.com/7OF5XgLdJN
Kyle Steinfeld
@ksteinfe
Feb 5
Some more initial results from a series that extends work exhibited at #neurips4creativity. Using streetview data, we trained a #pix2pixhd model to transform depthmap images to photographic images. pic.twitter.com/90Fyl8k06G
Mario Klingemann
@quasimondo
Jan 21, 2019
The principle is pretty simple: in a classic residual architecture you chain several residual blocks behind each other (in #pix2pixHD the default is 9 blocks). What I do in #RecuResGAN is use a single block but loop over it 9 times, feeding its output back into its input.
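The weight-sharing idea in the tweet above can be sketched with a toy scalar "residual block" (the functions, block count, and weights below are illustrative stand-ins, not the actual #RecuResGAN code):

```python
def residual_block(x, w):
    """One residual step: x + f(x), with a toy scalar map f(x) = w * x."""
    return x + w * x

def chained(x, weights):
    """Classic pix2pixHD-style stack: several distinct blocks in sequence."""
    for w in weights:
        x = residual_block(x, w)
    return x

def recursive(x, w, n=9):
    """RecuResGAN-style: one shared block, looped n times over its own output."""
    for _ in range(n):
        x = residual_block(x, w)
    return x

# With identical weights the two compute the same thing, but the recursive
# version stores 1 weight instead of 9 -- which is where the size saving
# in the tweet comes from.
print(recursive(1.0, 0.1) == chained(1.0, [0.1] * 9))  # True
```

In a real network the shared block would be a full convolutional module, and looping over it reuses its parameters at every iteration instead of allocating a fresh block per depth step.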
JC Testud
@jctestud
Nov 7, 2018
Mario Klingemann
@quasimondo
Oct 27, 2018
Thanks! There are 5 different GANs involved which employ my own architecture that owes a lot to #pix2pix and #pix2pixHD.
hans
@wavefunk_
Aug 27, 2018
Hopping on the #pix2pixHD frame prediction train. Here's a feedback loop between 2 models, one trained to predict the next frame in a video, the other trained to predict the original frame from a version processed by the other model between 1 and 10 times. pic.twitter.com/kRTO57n8oh
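The two-model feedback loop described above can be sketched with toy numeric stand-ins (the function names and the scalar "frames" below are hypothetical placeholders for the two trained #pix2pixHD models, not the author's code):

```python
def predict_next(frame):
    """Stand-in for model A: trained to predict the next video frame."""
    return frame * 1.5  # push the frame toward something new

def restore(frame):
    """Stand-in for model B: trained to map a processed frame back toward
    the look of an original frame."""
    return frame / 1.2  # undoes only part of the change, so the loop drifts

def feedback_loop(frame, steps):
    """Alternate the two models; each output is fed back as the next input."""
    frames = [frame]
    for _ in range(steps):
        frame = restore(predict_next(frame))
        frames.append(frame)
    return frames

print(feedback_loop(1.0, 3))
```

Because model B never fully inverts model A, successive frames keep drifting away from the starting image, which is what makes this kind of loop visually interesting.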
Mario Klingemann
@quasimondo
Aug 23, 2018
These are some snapshots from the training of my custom version of #pix2pixHD using #DensePose on the Costică Acsinte archive.