Structure of the paper (important)
1. Introduction
- context
- research gap
- aim of the study
2. Materials and Methods
- study object, materials, participants
- preparation of objects, or materials, or selection of participants
- study design
- interventions (e.g., experiments)
- methods of measurement, calculations
- statistical analysis
3. Results
- most important or first result
- other results in a specific order
- least important or last result
4. Discussion
- statement of the main result
- expected results
- comparison with the literature
- explanations (of results)
- limitations (of the methodology)
- generalizability
- conclusion
Guiding students to participate in scientific research
- topic title
- references
- write a review (abstract, or list the methods)
- implement and compare
- the contributions (possibly feature engineering, preprocessing, …, splitting into sub-problems, pretrained models)
- write a draft in the required format (LaTeX) and prepare the slides
Notes
Used to scale images:
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{images/fig1.pdf}
\caption{The main flow of the study}
\label{fig1}
\end{figure}
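When the target template is two-column (e.g., IEEE), scaling the figure relative to the column width is often more robust than a fixed `scale` factor; a minimal sketch that replaces the `scale` option in the block above (same file and label assumed):

```latex
\begin{figure}[htbp]
\centering
% width=0.9\columnwidth makes the figure track the column size,
% so it adapts to the template better than a fixed scale factor.
\includegraphics[width=0.9\columnwidth]{images/fig1.pdf}
\caption{The main flow of the study}
\label{fig1}
\end{figure}
```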
Used to draw tables smoothly:
\begin{table}[htbp]
\caption{The SacreBLEU score of each beam size in mBART based on greedy search}
\begin{center}
\begin{tabular}{cccc}
\hline & & \multicolumn{2}{c}{ SacreBLEU } \\
\cline { 3 - 4 } Experiment & Size & $\begin{array}{c}\text { BLEU } \\
\text { score }\end{array}$ & $\begin{array}{c}\text { 1-/2-/3-/4-gram} \\
\text { score }\end{array}$ \\
\hline 1 & 1 & 32.69 & $\mathbf{64.3} / \mathbf{41.3} / 29.0 / 20.9$ \\
2 & 2 & 33.49 & $63.7 / 41.1 / 29.1 / 21.1$ \\
3 & 4 & 33.91 & $63.0 / 40.8 / 29.1 / 21.2$ \\
4 & 6 & 34.16 & $62.8 / 40.8 / 29.1 / 21.3$ \\
5 & 8 & 34.33 & $62.8 / 41.0 / \mathbf{29.3} / 21.4$ \\
6 & 10 & $\mathbf{34.41}$ & $62.7 / 41.0 / \mathbf{29.3} / \mathbf{21.5}$ \\
7 & 12 & 34.39 & $62.7 / 41.0 / \mathbf{29.3} / 21.4$ \\
8 & 14 & 34.28 & $62.5 / 40.7 / 29.1 / 21.3$ \\
9 & 16 & 34.14 & $62.4 / 40.6 / 28.9 / 21.2$ \\
10 & 18 & 34.21 & $62.3 / 40.6 / 29.0 / 21.2$ \\
\hline
\end{tabular}
\label{tab2}
\end{center}
\end{table}
\begin{table}[htbp]
\caption{The SacreBLEU score of each beam size in mBART based on greedy search}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline { Experiment } & { Size } & \multicolumn{2}{|c|}{ SacreBLEU } \\
\cline { 3 - 4 } & & $\begin{array}{c}\text { BLEU } \\
\text { score }\end{array}$ & $\begin{array}{c}1 / 2 / 3 / 4 \text {-gram } \\
\text { scores }\end{array}$ \\
\hline 1 & 1 & 32.69 & $\mathbf{64.3} / \mathbf{41.3} / 29.0 / 20.9$ \\
\hline 2 & 2 & 33.49 & $63.7 / 41.1 / 29.1 / 21.1$ \\
\hline 3 & 4 & 33.91 & $63.0 / 40.8 / 29.1 / 21.2$ \\
\hline 4 & 6 & 34.16 & $62.8 / 40.8 / 29.1 / 21.3$ \\
\hline 5 & 8 & 34.33 & $62.8 / 41.0 / \mathbf{29.3} / 21.4$ \\
\hline 6 & 10 & $\mathbf{34.41}$ & $62.7 / 41.0 / \mathbf{29.3} / \mathbf{21.5}$ \\
\hline 7 & 12 & 34.39 & $62.7 / 41.0 / \mathbf{29.3} / 21.4$ \\
\hline 8 & 14 & 34.28 & $62.5 / 40.7 / 29.1 / 21.3$ \\
\hline 9 & 16 & 34.14 & $62.4 / 40.6 / 28.9 / 21.2$ \\
\hline 10 & 18 & 34.21 & $62.3 / 40.6 / 29.0 / 21.2$ \\
\hline
\end{tabular}
\label{tab2}
}
\end{table}
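For smoother horizontal rules, the `booktabs` package is a common alternative to `\hline`; a minimal sketch under the assumption that `\usepackage{booktabs}` is in the preamble, reusing the first two rows of the table above (the `tab3` label is hypothetical):

```latex
\begin{table}[htbp]
\caption{Example layout using booktabs rules}
\begin{center}
\begin{tabular}{cccc}
\toprule
Experiment & Size & BLEU score & 1-/2-/3-/4-gram scores \\
\midrule
1 & 1 & 32.69 & 64.3 / 41.3 / 29.0 / 20.9 \\
2 & 2 & 33.49 & 63.7 / 41.1 / 29.1 / 21.1 \\
\bottomrule
\end{tabular}
\label{tab3}
\end{center}
\end{table}
```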
Errors when submitting to the EDAS system (IEEE format)
- side margins: Upload failed: The right margin is 0.68 in on page (1->6) (widths: 7.14, …, in) which is below the required margin of "1 in" for letter-sized paper.
- top margins: Upload failed: The top margin is "0.72 in" on pages 2,3,4,5,6, which is below the required margin of 0.75
How to fix:
\usepackage{geometry}
\geometry{top=0.75in}
\geometry{right=1in}
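The two `\geometry` calls can also be combined into one, which keeps the fix in a single place; a minimal sketch:

```latex
\usepackage{geometry}
% Raise both offending margins in a single call.
\geometry{top=0.75in, right=1in}
```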
When writing a paper, note and remember:
- The text must come first; the table or figure it refers to appears below it. (very important)
- Use PDF when loading figure/data files
- SAID REPEATEDLY, STILL NOT FIXED: The discussion of the results is mostly descriptive; it does not analyze or explain the results in depth. Moreover, visual support to illustrate the results is lacking. -> Explain the results in more depth: explain why they occur and what they mean in the context of the research objectives.
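The placement rule above (text first, float below) can be sketched as follows, reusing the figure example from earlier in these notes:

```latex
% The figure is mentioned in the text BEFORE the float appears.
The main flow of the study is shown in Fig.~\ref{fig1}.

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{images/fig1.pdf}
\caption{The main flow of the study}
\label{fig1}
\end{figure}
```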
Using a .bib file:
@article{bib1,
title={Neural machine translation by jointly learning to align and translate},
author={Bahdanau, Dzmitry and Cho, Kyunghyun and Bengio, Yoshua},
journal={arXiv preprint arXiv:1409.0473},
year={2014}
}
@article{bib2,
title={Machine Translation Systems Based on Classical-Statistical-Deep-Learning Approaches},
author={Sharma, Sonali and Diwakar, Manoj and Singh, Prabhishek and Singh, Vijendra and Kadry, Seifedine and Kim, Jungeun},
journal={Electronics},
volume={12},
number={7},
pages={1716},
year={2023},
publisher={MDPI}
}
@article{bib3,
title={Neural machine translation: past, present, and future},
author={Mohamed, Shereen A and Elsayed, Ashraf A and Hassan, YF and Abdou, Mohamed A},
journal={Neural Computing and Applications},
volume={33},
pages={15919--15931},
year={2021},
publisher={Springer}
}
@article{bib4,
title={Recent advances in natural language processing via large pre-trained language models: A survey},
author={Min, Bonan and Ross, Hayley and Sulem, Elior and Veyseh, Amir Pouran Ben and Nguyen, Thien Huu and Sainz, Oscar and Agirre, Eneko and Heintz, Ilana and Roth, Dan},
journal={ACM Computing Surveys},
year={2021},
publisher={ACM New York, NY}
}
@article{bib5,
title={BPE-dropout: Simple and effective subword regularization},
author={Provilkov, Ivan and Emelianenko, Dmitrii and Voita, Elena},
journal={arXiv preprint arXiv:1910.13267},
year={2019}
}
@article{bib6,
title={Transformers without tears: Improving the normalization of self-attention},
author={Nguyen, Toan Q and Salazar, Julian},
journal={arXiv preprint arXiv:1910.05895},
year={2019}
}
@article{bib7,
title={Understanding and improving layer normalization},
author={Xu, Jingjing and Sun, Xu and Zhang, Zhiyuan and Zhao, Guangxiang and Lin, Junyang},
journal={Advances in Neural Information Processing Systems},
volume={32},
year={2019}
}
@article{bib8,
title={Learning when to concentrate or divert attention: Self-adaptive attention temperature for neural machine translation},
author={Lin, Junyang and Sun, Xu and Ren, Xuancheng and Li, Muyu and Su, Qi},
journal={arXiv preprint arXiv:1808.07374},
year={2018}
}
@article{bib9,
title={Syntax-enhanced neural machine translation with syntax-aware word representations},
author={Zhang, Meishan and Li, Zhenghua and Fu, Guohong and Zhang, Min},
journal={arXiv preprint arXiv:1905.02878},
year={2019}
}
@article{bib10,
title={Deconvolution-based global decoding for neural machine translation},
author={Lin, Junyang and Sun, Xu and Ren, Xuancheng and Ma, Shuming and Su, Jinsong and Su, Qi},
journal={arXiv preprint arXiv:1806.03692},
year={2018}
}
@inproceedings{bib11,
title={Stanford neural machine translation systems for spoken language domains},
author={Luong, Minh-Thang and Manning, Christopher D},
booktitle={Proceedings of the 12th International Workshop on Spoken Language Translation: Evaluation Campaign},
pages={76--79},
year={2015}
}
@inproceedings{bib12,
title={The IWSLT 2016 evaluation campaign},
author={Cettolo, Mauro and Niehues, Jan and St{\"u}ker, Sebastian and Bentivogli, Luisa and Cattoni, Rolando and Federico, Marcello},
booktitle={Proceedings of the 13th International Conference on Spoken Language Translation},
year={2016}
}
@article{bib13,
title={Attention is all you need},
author={Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, {\L}ukasz and Polosukhin, Illia},
journal={Advances in neural information processing systems},
volume={30},
year={2017}
}
@article{bib14,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{bib15,
title={Dialogpt: Large-scale generative pre-training for conversational response generation},
author={Zhang, Yizhe and Sun, Siqi and Galley, Michel and Chen, Yen-Chun and Brockett, Chris and Gao, Xiang and Gao, Jianfeng and Liu, Jingjing and Dolan, Bill},
journal={arXiv preprint arXiv:1911.00536},
year={2019}
}
@article{bib16,
title={Multilingual denoising pre-training for neural machine translation},
author={Liu, Yinhan and Gu, Jiatao and Goyal, Naman and Li, Xian and Edunov, Sergey and Ghazvininejad, Marjan and Lewis, Mike and Zettlemoyer, Luke},
journal={Transactions of the Association for Computational Linguistics},
volume={8},
pages={726--742},
year={2020},
publisher={MIT Press}
}
@article{bib17,
title={A call for clarity in reporting BLEU scores},
author={Post, Matt},
journal={arXiv preprint arXiv:1804.08771},
year={2018}
}
Loading the bibliography:
%\bibliographystyle{plain}
\bibliographystyle{ieeetr}
\bibliography{references}
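Entries from `references.bib` are then cited in the text by their keys; a minimal sketch using keys that exist in the list above:

```latex
The Transformer architecture \cite{bib13} and the multilingual
denoising pre-training of mBART \cite{bib16} are used, and BLEU
scores are reported with SacreBLEU \cite{bib17}.
```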
Pay attention to the required format (IEEE or Springer).
When writing a paper, pay attention to how published papers do it:
- how do they draw the tables?
- how do they write the keywords?
- how do they handle the header and footer?
- how is the reference section written?
- how are the figures displayed, and how are they designed in detail?
Prompts
Impulse noise [16], Quantization noise [17], and Poisson noise [18]
Enhancing Robotic Vision: A Comparative Analysis of Diffusion-Based Super-Resolution Techniques
The title of the paper: Enhancing Robotic Vision: A Comparative Analysis of Diffusion-Based Super-Resolution Techniques
Write 3 paragraphs to:
A comparison of SSIM and PSNR when applying the latent diffusion model (LDM) and the Image Super-Resolution via Iterative Refinement (ISRR) model, in the case of data compressed by SD (stable diffusion) compression, is shown in Table 1.
The results show that the SSIM value of the ISRR model is better, reaching 0.854, while the LDM model reaches 0.752.
Please discuss and evaluate it.
The results show that the PSNR value of the ISRR model is better, reaching 27.888, while the LDM model reaches 26.574.
Please discuss and evaluate it.
=======
The SSIM values of the LDM model for images compressed by SD compression with different types of noise are shown in Fig. 3.
With impulse noise the result is 0.631; with Poisson noise the lowest result, about 0.549, is obtained; and with quantization noise the highest result, 0.653, is obtained.
Please discuss and evaluate it.
====
The PSNR values of the LDM model for images compressed by SD compression with different types of noise are shown in Fig. 4.
With quantization noise the result is 23.665; with impulse noise the lowest result, about 20.901, is obtained; and with Poisson noise the highest result, 25.639, is obtained.
Please discuss and evaluate it.
================
Write a conclusion for the paper:
Enhancing Robotic Vision: A Comparative Analysis of Diffusion-Based Super-Resolution Techniques
with the results:
The results show that the SSIM value of the ISRR model is better, reaching 0.854, while the LDM model reaches 0.752.
The results show that the PSNR value of the ISRR model is better, reaching 27.888, while the LDM model reaches 26.574.
The conclusion shows that the ISRR model is more effective for the high-resolution task with camera data collected from robotics.
===========
Write a one-paragraph abstract for the paper:
Enhancing Robotic Vision: A Comparative Analysis of Diffusion-Based Super-Resolution Techniques
with the results:
The results show that the SSIM value of the ISRR model is better, reaching 0.854, while the LDM model reaches 0.752.
The results show that the PSNR value of the ISRR model is better, reaching 27.888, while the LDM model reaches 26.574.
The conclusion shows that the ISRR model is more effective for the high-resolution task with camera data collected from robotics.
In the abstract, do not abbreviate (LDM) or anything else, unless it is a common term such as the SSIM or PSNR metrics.
Alternatively, it can be written like this:
latent diffusion models (LDM) and iteratively refined ultra-high resolution image models (ISRR)
References
Internet
The end.