Week 5 Presentation
Paper: Sequence to Sequence Learning with Neural Networks
Presenter: ํ์ฃผ
- Link: https://arxiv.org/pdf/1409.3215.pdf
- Topic: overcoming the limits of existing machine translation with two LSTM models
- Background: existing SMT and RNN approaches struggle with long sentences and with changes in word order and sentence structure
- Key points (a code sketch follows this list)
  - The model uses two LSTMs, an encoder and a decoder, each stacked 4 layers deep => the larger number of parameters enables deeper learning
  - Reversing the order of the input sentence yields better results
- Slides: https://github.com/dsc-sookmyung/2021-DeepSleep-Paper-Review/blob/main/Week5/SeqToSeq.md
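A minimal PyTorch sketch of the setup above, not the authors' code: the 4-layer stacked encoder/decoder LSTMs and the input-reversal trick follow the paper, while the vocabulary sizes, dimensions, and toy batch below are illustrative assumptions.

```python
# Sketch of the paper's encoder-decoder: one LSTM reads the reversed source
# sentence, and a second LSTM generates the target from the encoder's final
# (hidden, cell) state. Hyperparameters here are assumed toy values.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB = 1000, 1000   # assumed toy vocabulary sizes
EMB, HID, LAYERS = 256, 512, 4      # 4 stacked LSTM layers, as in the paper

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, num_layers=LAYERS, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, num_layers=LAYERS, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src, tgt):
        # Reverse the source sentence: the paper reports this improves
        # results by shortening early source-target dependencies.
        src = torch.flip(src, dims=[1])
        _, (h, c) = self.encoder(self.src_emb(src))
        # The fixed-size (h, c) is all the decoder receives about the source.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), (h, c))
        return self.out(dec_out)   # logits over the target vocabulary

model = Seq2Seq()
src = torch.randint(0, SRC_VOCAB, (2, 7))   # toy batch: 2 sentences, length 7
tgt = torch.randint(0, TGT_VOCAB, (2, 5))
print(model(src, tgt).shape)                # torch.Size([2, 5, 1000])
```

The printed shape confirms the decoder produces one distribution over the target vocabulary per target position.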
Upcoming Study Plan
Let's review the key papers behind the development of deep-learning-based machine translation!
8/8 - Attention
Presenter: ํ์ฃผ
- Paper: Neural Machine Translation by Jointly Learning to Align and Translate
- Link: https://arxiv.org/pdf/1409.0473v7.pdf
Presenter: ๋์ฐ
- Paper: Effective Approaches to Attention-based Neural Machine Translation
- Link: https://arxiv.org/pdf/1508.04025.pdf
8/15 - Transformer
Presenter: ์์ฐ
- Paper: Attention Is All You Need
- Link: https://arxiv.org/pdf/1706.03762.pdf
8/22 - GPT / BERT
Presenter: ํฌ
- Paper: Improving Language Understanding by Generative Pre-Training
- Link: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
Presenter: ํ๋
- Paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Link: https://arxiv.org/pdf/1810.04805.pdf