How to get an initial screen with no angle? #2

Open
Zhangwenbo0324 opened this issue Jan 7, 2021 · 0 comments
@Zhangwenbo0324

Hi,
Thank you for your code on the Harlow experiment; I've learned a lot from it. In your environment settings the action space is reduced and the agent only takes one action per trial; the reverse action that brings it back to the center of the screen is performed automatically by the wrapped environment. When I reset the environment, I get an initial screen that is rotated at an angle, like this:

[Figure 1: initial observation after reset, with the view at an angle]

Only once the screen has rotated back to zero angle,

[Figure 2: observation with the view centered, no angle]

and the cross is then fixated for TIME_TO_FIXATE_CROSS, can the two pictures be seen.

[Figure 3: observation with the two pictures shown]

But in your function 'work' in the script /meta_rl/worker.py,

    for _ in range(1):
      # to optimize for GPU, update on large batches of episodes
      d = False
      r = 0
      a = 0
      t = 0
      s = self.env.reset()
      # Allow us to remove noise when starting episode
      for i in range(5):
        _, r_, _, _ = self.env.step(np.array([0, 0, 0, 0, 0, 0, 0], dtype=np.intc))

      start_time = time.time()
      while d == False:
        #Take an action using probabilities from policy network output.
        a_dist,v,rnn_state_new = sess.run([self.local_AC.policy,self.local_AC.value,self.local_AC.state_out],
          feed_dict={
          self.local_AC.state:[s],
          self.local_AC.prev_rewards:[[r]],
          self.local_AC.timestep:[[t]],
          self.local_AC.prev_actions:[a],
          self.local_AC.state_in[0]:rnn_state[0],
          self.local_AC.state_in[1]:rnn_state[1]})

        a = np.random.choice(a_dist[0],p=a_dist[0])
        a = np.argmax(a_dist == a)
        rnn_state = rnn_state_new
        action = deepmind_action_api(a)

        """Objectif: Reduce action space to speed up training time

        1st Action: No-Op, wait 1 frame to allow pictures to appears
        2nd Action: True Action taken
        3rd Action: Reverse action to go back at the center of the screen
        4th Action: No-Op, to wait 1 frame to allow the cross to appears
        """
        _, r_, _, _ = self.env.step(np.array([0, 0, 0, 0, 0, 0, 0], dtype=np.intc))
        s1, r, d, t = self.env.step(action, True)
        r += r_
        if not d:
            _, r_, d, _ = self.env.step(-action)
            r += r_
            if not d:
                _, r_, d, _ = self.env.step(np.array([0, 0, 0, 0, 0, 0, 0], dtype=np.intc))
                r += r_

        episode_buffer.append([s,a,r,t,d,v[0,0]])
        episode_values.append(v[0,0])
        episode_reward += r
        total_steps += 1
        episode_step_count += 1
        s = s1

You reset the env first and take 5 zero-action steps ( self.env.step(np.array([0, 0, 0, 0, 0, 0, 0], dtype=np.intc)) ). Then you take 1 more zero-action step (the No-Op that waits 1 frame to allow the pictures to appear), and only after that is the true action taken. But when you reset your env, you get the original screen at a certain angle, and since the first 5 + 1 = 6 actions are all No-Ops, the screen is still the same as the original one. In this way the two pictures never appear on the screen. In my opinion, the true action should only be taken after the pictures are shown, so what is needed is that the initial screen has no angle when the env is reset. How can I guarantee this?
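
To make the question concrete, here is the kind of warm-up I have in mind as a workaround. It is only a sketch: the helper name settle_after_reset, the max_frames budget and the pixel-difference threshold are made up for illustration, and it assumes the wrapped env's step returns (obs, reward, done, t) for the No-Op action just like in the snippet above.

import numpy as np

NO_OP = np.array([0, 0, 0, 0, 0, 0, 0], dtype=np.intc)

def settle_after_reset(env, max_frames=100, tol=1e-3):
    """Step No-Op after reset until the view stops changing,
    i.e. until the initial rotation has finished playing out."""
    s = env.reset()
    prev = np.asarray(s, dtype=np.float32)
    for _ in range(max_frames):
        s, _, d, _ = env.step(NO_OP)
        if d:
            # Episode ended during the warm-up: start over.
            s = env.reset()
            prev = np.asarray(s, dtype=np.float32)
            continue
        cur = np.asarray(s, dtype=np.float32)
        # Crude "still rotating?" test: mean absolute pixel difference.
        if np.mean(np.abs(cur - prev)) < tol:
            break
        prev = cur
    return s

Calling something like settle_after_reset(self.env) instead of the fixed 5 No-Op steps at the top of 'work' would at least wait until the rotation is over, but it feels like a workaround; it would be nicer if reset() itself already returned the screen with no angle.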
