We have released the source code of the prize winners as follows.
Quantitative Evaluation 1st & Qualitative Evaluation Winner: FYSignate1009
Quantitative Evaluation 2nd & Qualitative Evaluation Winner: Petr(CZ)
Quantitative Evaluation 3rd: askbox
Qualitative Evaluation Winner: Kot
Notice
[2024/03/18] The video of the award ceremony has been released: https://youtu.be/CJgVi_I_i1k
[2024/01/11] The submission deadline has been extended to 1/12 (JST).
[2023/12/12] Submission is now available.
Objective
While early social implementation of artificial intelligence (AI) technology is widely anticipated, technologies that accelerate its introduction are crucial: building and training AI models currently requires a tremendous amount of tuning time. To address this problem, NEDO runs the project "Development of integration technology that becomes the core of the next-generation artificial intelligence and robots / Research and development that expands the areas of application of artificial intelligence technology." As a common fundamental technology for promoting early social implementation of AI, the project develops techniques that shorten the time needed to build and train AI models, together with AI acceleration modules based on those techniques. Specifically, within the project's R&D themes "Research and Development of Technology to Construct Cyber Physical Value Chains and Accelerate the Introduction of AI" and "Research and Development of Technology to Accelerate the Introduction of Artificial Intelligence Technology through Automatic Machine Learning," we have developed technologies such as the search for optimal hyperparameters (Hyperparameter Optimization, hereafter HPO) and the search for optimal neural network structures (Neural Architecture Search). The results have been compiled into an open-source software package called aiaccel, which is being released to the public in stages.
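For readers unfamiliar with HPO, the snippet below illustrates the idea in its simplest form: random search over a small hyperparameter space. It is a generic, self-contained sketch, not aiaccel's actual interface, and `train_and_score` is a hypothetical stand-in for a real training run.

```python
import random

def train_and_score(learning_rate, batch_size):
    """Hypothetical stand-in for a real training run.

    In a real HPO loop this would train a model with the given
    hyperparameters and return its validation accuracy.
    """
    return 1.0 - abs(learning_rate - 0.01) - abs(batch_size - 64) / 1000

# Random search: the simplest form of hyperparameter optimization.
best = None
for _ in range(50):
    params = {
        "learning_rate": 10 ** random.uniform(-4, -1),   # log-uniform sample
        "batch_size": random.choice([16, 32, 64, 128]),
    }
    score = train_and_score(**params)
    if best is None or score > best[0]:
        best = (score, params)

print("best score:", best[0], "with", best[1])
```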
To further promote this project, it is essential to stimulate technological development in a competitive environment joined by a wide range of participants, including universities, corporations, and overseas researchers. We have therefore decided to hold an AI introduction acceleration module contest to encourage the creation of AI introduction acceleration modules with excellent individual performance and versatility.
The first HPO Module Contest was held in FY2022 and, despite its short duration, attracted many participants and proved very fruitful. We would like to take this opportunity to once again thank everyone who took part.
In this second contest, participants will develop a module that mechanically generates pre-training datasets from mathematical formulas, algorithms, and similar procedures, with the goal of improving transfer-learning performance. In AI research and development, including the research topics above, preparing good-quality image datasets is essential. In general, however, constructing the natural-image datasets needed for AI training requires a great deal of effort for image collection and labeling. Using existing natural-image datasets also poses challenges, such as restrictions on commercial use and the possibility that they contain copyrighted images that cannot be used. Reference [1] reported that pre-training on a mechanically generated image dataset, such as fractal images, combined with transfer learning can achieve the same discrimination accuracy as training on a natural-image dataset. If such image datasets can be used for pre-training, the issues surrounding natural-image datasets can be avoided, and we look forward to your challenge in this excellent opportunity to develop a potentially revolutionary learning process for AI.
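As a rough illustration of formula-driven generation of the kind reported in [1] (and not an official baseline for this contest), the following Python sketch renders one fractal image from a randomly parameterized Iterated Function System (IFS); in a FractalDB-style dataset, each sampled parameter set defines a class and its renderings become that class's pre-training images. All function names and parameter ranges here are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def random_ifs(n_maps=4, rng=None):
    """Sample contractive affine maps (A, b) for an Iterated Function System."""
    rng = rng or np.random.default_rng()
    maps = []
    for _ in range(n_maps):
        A = rng.uniform(-1, 1, size=(2, 2))
        # Rescale each map so its spectral norm is below 1 (a contraction),
        # which keeps the chaos-game iterates bounded.
        A *= rng.uniform(0.3, 0.9) / (np.linalg.norm(A, 2) + 1e-8)
        b = rng.uniform(-1, 1, size=2)
        maps.append((A, b))
    return maps

def render_fractal(ifs, n_points=100_000, size=256):
    """Render the IFS attractor into a binary image via the chaos game."""
    rng = np.random.default_rng()
    pts = np.empty((n_points, 2))
    x = np.zeros(2)
    for i in range(n_points):
        A, b = ifs[rng.integers(len(ifs))]  # pick one affine map at random
        x = A @ x + b
        pts[i] = x
    # Normalize the point cloud into pixel coordinates and rasterize it.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    px = ((pts - lo) / (hi - lo + 1e-8) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.uint8)
    img[px[:, 1], px[:, 0]] = 255
    return Image.fromarray(img)

# One synthetic "class": a fixed IFS defines the category, and repeated
# (optionally jittered) renderings of it serve as its pre-training images.
render_fractal(random_ifs()).save("fractal_sample.png")
```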
Overview
Final Round

| Item | Details |
| --- | --- |
| Task | Quantitative evaluation: develop a module for generating pre-training datasets. Qualitative evaluation: prepare and submit a report explaining the ingenious features of the created module. Both submissions are required. |
| Requirement | Develop a module that runs on the SIGNATE server. |
| Eligibility | By the end date, prepare and submit a report explaining the ingenious features of the created module. Prize winners must add an open-source license to their source code and publish it. The winners will be asked to give a presentation based on the above report at the awards ceremony (scheduled for March 2024). |
| Assessment Method | Quantitative evaluation: recognition accuracy for a number of transfer-learning tasks (1,000 classes). Qualitative evaluation: review of the submitted reports by a panel of judges. |
| Prize | Quantitative evaluation: 1st ¥1,200,000; 2nd ¥1,000,000; 3rd ¥800,000. Qualitative evaluation: ¥500,000 × 2 teams |
Tasks for the Final Round
Performance (quantitative evaluation) is assessed in terms of recognition accuracy on a number of transfer-learning tasks (1,000 classes); an illustrative sketch of this flow is shown after the note below. The report (qualitative evaluation) is assessed in terms of the novelty and superiority of the methodology.
To win a prize, you must submit both a module and a report. Please note that failure to submit a report will disqualify you from both the quantitative and qualitative evaluations.
*See Data tab for details.
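For intuition only, the sketch below mirrors the spirit of the quantitative evaluation: pre-train a classifier on mechanically generated images, fine-tune it on a downstream 1,000-class task, and report top-1 recognition accuracy. The directory names, model choice, and training schedule are placeholder assumptions, not the contest's actual evaluation pipeline.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Placeholder directories for the generated pre-training images and the
# downstream 1,000-class transfer-learning task (hypothetical paths).
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
pretrain_ds = datasets.ImageFolder("generated_dataset/", transform=tfm)
target_ds = datasets.ImageFolder("target_task/train/", transform=tfm)
test_ds = datasets.ImageFolder("target_task/val/", transform=tfm)

def train(model, dataset, epochs=1, lr=1e-3):
    """Minimal supervised training loop (schedule chosen arbitrarily)."""
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# 1) Pre-train from scratch on the mechanically generated dataset.
model = models.resnet18(num_classes=len(pretrain_ds.classes))
train(model, pretrain_ds)

# 2) Transfer: replace the classifier head for the 1,000-class target task
#    and fine-tune the weights.
model.fc = nn.Linear(model.fc.in_features, 1000)
train(model, target_ds)

# 3) Evaluate top-1 recognition accuracy on the held-out target data.
model.eval()
correct = total = 0
with torch.no_grad():
    for x, y in DataLoader(test_ds, batch_size=64):
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
print(f"top-1 accuracy: {correct / total:.4f}")
```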
Test Environment
Participants must prepare their submission materials using only their own computational resources.