Purpose
This is purely a matter of terminology.
I gave some thought to the difference between the terms "transfer learning" and "fine-tuning".
Conclusion
Before that
I imagine people take a variety of positions on this:
- People who simply aren't interested in minor differences in terminology
- People who believe transfer learning and fine-tuning can be cleanly distinguished
  (e.g., xxx means retraining with every weight frozen except the fully connected layers; a sketch of that recipe follows this list.)
- Others
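As a concrete illustration of that second position, here is a minimal PyTorch sketch of the "freeze everything except the fully connected layer" recipe. The ResNet-18 backbone, `NUM_CLASSES`, and the hyperparameters are assumptions made up for this sketch, not anything taken from the sources quoted below.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical number of classes in the new task

# Load a network pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every pretrained weight ...
for param in model.parameters():
    param.requires_grad = False

# ... then swap in a fresh fully connected head; it alone stays trainable.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```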
My understanding
"Transfer learning" means using something trained elsewhere in a new place,
while "fine-tuning" means... what, exactly? Perhaps something like **"training it some more"**?
Perhaps the following passage is a useful reference.
Source: https://www.quora.com/What-is-the-difference-between-transfer-learning-and-fine-tuning
> The terms transfer learning and fine-tuning refer to two concepts that are very similar in many ways, and the two terms are being widely used almost interchangeably. The two terms don’t imply the same goal or motivation, but they still refer to a similar concept.
>
> What I mean by “similar concept” is this: Fine-tuning means taking some machine learning model that has already learned something before (i.e. been trained on some data) and then training that model (i.e. training it some more, possibly on different data). That’s all fine-tuning means. Some other answers put arbitrary, incorrect limitations on the term, for example claiming that it’s only called fine-tuning if it refers to the final stages of the training. None of these limitations bear any substance.
>
> Now, transfer learning means to apply the knowledge that some machine learning model holds (represented by its learned parameters) to a new (but in some way related) task. This should already look quite familiar to you to the concept of fine-tuning defined above, but in case it doesn’t yet, let’s put more abstractly what you actually do when you perform transfer learning: You take a model that has been trained on something and you use (part or all of) this model as (part or all of) a new model and train it on new data (i.e. train it some more). The last part of the previous sentence is what is meant by “applying” the trained model to a new task.
>
> In summary, it would be wrong to say that fine-tuning and transfer learning are the exact same thing, but as explained above they both refer to the concept of taking an existing, trained model and training it further, either as is or as part of a new model.
>
> The main reason why people use the term fine-tuning at all is simply to indicate the training of a machine learning model that is not being trained from scratch, but has already been trained before on some data (not necessarily to convergence). It’s just a convenient way to express that you are not training from scratch. Whether you train on the same or new data, and for the same or a new task, is a different story, the term fine-tuning just in itself contains no implications on any of that.
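Taken in the quote's broad sense, fine-tuning needs no frozen layers at all: you just keep training a model that has already been trained. A minimal PyTorch sketch under that reading (the ResNet-18, the dummy batch standing in for new data, and the learning rate are all illustrative assumptions):

```python
import torch
from torchvision import models

# Start from a model that "has already learned something before".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Nothing is frozen; the whole model is simply trained some more,
# typically at a small learning rate so the pretrained weights shift gently.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

# A single random batch stands in for the "new data" (dummy placeholder).
new_loader = [(torch.randn(4, 3, 224, 224), torch.randint(0, 1000, (4,)))]

model.train()
for inputs, targets in new_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```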
Going forward
Nothing in particular.