Open-source project name: kimbring2/minecraft_ai
Repository URL: https://github.com/kimbring2/minecraft_ai
Primary language: Python (70.4%)

Introduction
Code for playing Minecraft using Deep Learning.

Normal Dependencies
Python Dependencies
Reference
Action, Observation of Minecraft
Model Architecture
Learning-Based Model Architecture
Rule-Based Model Architecture
Loss for Training
Training Method

Run Supervised Learning
For Minecraft, the agent cannot learn every behaviour needed for high-level play using Reinforcement Learning alone because of the complexity of the task. In such cases, the agent must first learn from human expert data. Try training the network for MineRLTreechop-v0 first using the command below.
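The supervised step is behaviour cloning: minimising the cross-entropy between the policy's predicted action distribution and the expert's recorded actions. Below is a minimal pure-Python sketch of that loss, not the repository's actual training code; the function names are hypothetical.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of action logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def behaviour_cloning_loss(logits_batch, expert_actions):
    # Mean cross-entropy between policy predictions and expert action labels.
    total = 0.0
    for logits, action in zip(logits_batch, expert_actions):
        probs = softmax(logits)
        total += -math.log(probs[action])
    return total / len(expert_actions)

# Toy batch: the policy strongly prefers the expert's action in each state,
# so the loss is close to 0 (matching the near-zero training curve).
logits_batch = [[5.0, 0.0, 0.0], [0.0, 6.0, 0.0]]
expert_actions = [0, 1]
print(behaviour_cloning_loss(logits_batch, expert_actions))
```

As training drives the expert action's logit above the others, this loss falls toward zero, which is the behaviour described for the training graph.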
The loss should fall to near 0, as shown in the graph below. The model is saved in a folder named model under the workspace path. You can download the weights of the trained SL model from Google Drive. Try to use the 'tree_supervised_model_15800' file. After training finishes, you can test the trained model using the command below.
Run Reinforcement Learning
Because game episodes are long, the standard A2C method cannot be used, since it requires a whole episode at once. Therefore, an off-policy A2C such as IMPALA is needed: like a DQN, it can restore trajectory data from a buffer for training. You can run IMPALA with the Supervised model for MineRL using the command below.
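The off-policy idea, storing fixed-length trajectory segments from actors in a buffer and sampling them later for learner updates, can be sketched as follows. This is a simplified illustration under stated assumptions; the class name TrajectoryBuffer and the segment layout are hypothetical, not the repository's API.

```python
import random
from collections import deque

class TrajectoryBuffer:
    """Stores fixed-length trajectory segments for off-policy (IMPALA-style) training."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest segments automatically.
        self.segments = deque(maxlen=capacity)

    def add(self, segment):
        # A segment is a list of (observation, action, reward) steps
        # collected by an actor; the learner consumes it later.
        self.segments.append(segment)

    def sample(self, batch_size):
        # Sample segments uniformly at random for a learner update.
        return random.sample(list(self.segments), batch_size)

# Toy usage: actors push 20-step segments, the learner samples a batch.
buffer = TrajectoryBuffer(capacity=100)
for episode in range(5):
    segment = [((episode, t), 0, 0.0) for t in range(20)]
    buffer.add(segment)
batch = buffer.sample(batch_size=2)
print(len(batch), len(batch[0]))
```

Because the learner trains on segments collected by an earlier version of the policy, IMPALA additionally applies an off-policy correction (V-trace) to the sampled data; that correction is omitted here for brevity.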
You can ignore the error below from the learner.py part. It does not affect the training process.
After some training, the agent starts to collect wood from trees and earn rewards, as shown in the graph below. You can download the weights of the trained RL model from Google Drive. Try to use the 'tree_reinforcement_model_128000' file. The video below shows the evaluation result of the trained agent.

Detailed information