Create an eye-tracking experiment
This page will show you how to collect eye-tracking data in a simple PsychoPy paradigm. We will use the same paradigm that we built together in the Getting started with PsychoPy tutorial. If you have not done that tutorial yet, please go through it first.
Tobii eye-tracker
Note that this tutorial is specific to Tobii eye-trackers. The general steps and ideas are obviously applicable to other eye-trackers, but the specific code and packages may vary.
Our Approach
The method we’ll show you here is designed to be simple and functional rather than the most efficient or sophisticated approach possible. There are more advanced techniques for handling eye tracking data collection, buffer management, and event synchronization, but we’ve prioritized code that’s easy to understand and modify.
Our goal is to get you up and running with a working eye tracking experiment that you can build upon. Once you’re comfortable with these basics, you can always optimize and refine your approach for more demanding applications.
Tobii_sdk
To start, we will look into how to connect and talk to our Tobii eyetracker with the SDK that Tobii provides. An SDK is a collection of tools and programs for developing applications for a specific platform or device. We will use the Python Tobii SDK that lets us easily find and get data from our Tobii eye tracker.
Install
To install the Python Tobii SDK, we can simply run this command in our conda terminal:
```
pip install tobii_research
```
Compatibility older eye-trackers
If you're using an older Tobii eye-tracker, check the compatibility page to see which version of tobii_research you need for your specific model.
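If your model does need an older release, you can pin the SDK version when installing. The version string below is just a placeholder; grab the right one from the compatibility page:

```
pip install tobii_research==<version-for-your-model>
```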
Great! We have installed the Tobii SDK.
Connect to the eye-tracker
So how does this library work? How do we connect to the eye-tracker and collect our data? Very good questions!
The `tobii_research` documentation is quite extensive and describes in detail a lot of useful functions and data classes. However, we don't need much to start our experiment.
First we need to identify all the eye trackers connected to our computer. Yes, plural: `tobii_research` will return a list of all the eye trackers connected to our computer. 99.99999999% of the time you will only have one eye tracker connected, so we can just select the first (and usually only) eye tracker found.
```python
# Import tobii_research library
import tobii_research as tr

# Find all connected eye trackers
found_eyetrackers = tr.find_all_eyetrackers()

# We will just use the first one
Eyetracker = found_eyetrackers[0]
```
Perfect!! We have identified our eye-trackers and selected the first (and only) one.
We are now ready to use our eye-tracker to collect some data… but how?
Collect data
Tobii_research has a cool way of telling us what data we are collecting at each time point: it uses a callback function. What is a callback function, you ask? It is a function that `tobii_research` calls each time it has a new data point. Let's say we have an eye tracker that collects data at 300 Hz (300 samples per second): the function will be called every time the tracker produces one of those 300 samples.
This callback function will give us a `gaze_data` dictionary, which contains multiple pieces of information about the collected sample.
Here is our callback function:
```python
# callback function
def gaze_data_callback(gaze_data):
    print(gaze_data)
```
This will print the entire dictionary to your console. Here’s what you’ll see:
```python
{'device_time_stamp': 467525500,
 'system_time_stamp': 6405415231,
 'left_gaze_point_on_display_area': (0.4633694291, 0.4872185290),
 'left_gaze_point_in_user_coordinate_system': (-10.15791702, 128.29026794, 40.876254),
 'left_gaze_point_validity': 1,
 'left_pupil_diameter': 5.655228,
 'left_pupil_validity': 1,
 'left_gaze_origin_in_user_coordinate_system': (-25.86829758, 1.41938722, 644.839478),
 'left_gaze_origin_in_trackbox_coordinate_system': (0.561557, 0.481128, 0.489121),
 'left_gaze_origin_validity': 1,
 'right_gaze_point_on_display_area': (0.4944303632, 0.4498708546),
 'right_gaze_point_in_user_coordinate_system': (0.7905667424, 135.2486572266, 43.373546),
 'right_gaze_point_validity': 1,
 'right_pupil_diameter': 5.307220,
 'right_pupil_validity': 1,
 'right_gaze_origin_in_user_coordinate_system': (32.52792358398, -2.97285223007, 640.345520),
 'right_gaze_origin_in_trackbox_coordinate_system': (0.431783, 0.495703, 0.483452),
 'right_gaze_origin_validity': 1}
```
Wow! Look at all that data from just one sample!
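Since each sample arrives as a plain dictionary, you can also pick out single fields instead of printing everything. Here is a minimal sketch of an alternative callback (field names taken from the printout above; in Tobii's data, a validity of 1 means the eye was actually tracked):

```python
# Alternative callback: report only the left eye's position
def gaze_data_callback(gaze_data):
    # Normalized (0-1) gaze position of the left eye on the screen
    lx, ly = gaze_data['left_gaze_point_on_display_area']
    if gaze_data['left_gaze_point_validity']:  # 1 = eye was tracked
        print(f'Left eye at x={lx:.3f}, y={ly:.3f}')
```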
Now we need to tell the eye tracker to actually use our callback function. This part is super easy:
```python
# Start the callback function
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback, as_dictionary=True)
```
What we're doing here is subscribing to the `EYETRACKER_GAZE_DATA` stream and telling it to send all that data to our `gaze_data_callback` function. The `as_dictionary=True` argument is what makes each sample arrive as the dictionary we saw printed above. Once this runs, your console will start flooding with data!
Global to save
Great! You’ve set up a callback function that receives eye tracking data from your device. However, printing 300 data points per second to the console creates an unreadable stream of text that’s not useful for analysis. We need to store this data properly.
Let’s create a list to collect all the gaze data:
```python
# Create an empty list we will append our data to
gaze_data_buffer = []
```
Perfect! We've got our list to which we can append the incoming data. We can simply append to this list inside our callback function, so every time a new sample arrives it will be added there. This is how our script could look now:
```python
# callback function
def gaze_data_callback(gaze_data):
    global gaze_data_buffer
    gaze_data_buffer.append(gaze_data)

# Create an empty list we will append our data to
gaze_data_buffer = []

# Start the callback function
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback, as_dictionary=True)
```
Notice the `global` keyword in the callback function. This tells Python that we want to use the `gaze_data_buffer` variable that was created outside the function. Without this keyword, assigning to `gaze_data_buffer` inside the function would create a new local variable instead of using our existing list.
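If you want to see the difference for yourself, here is a tiny toy example outside the experiment. Method calls like `.append()` find the outer list on their own; it is assignment (which we will need for the buffer swap later) that requires `global`:

```python
samples = []

def works_without_global(sample):
    samples.append(sample)   # method call: Python finds the outer list

def needs_global(sample):
    global samples           # required because we assign to the name below
    samples = samples + [sample]
```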
Now instead of flooding the console, every new sample gets stored in our list for later analysis!
Triggers/Events
As we’ve seen, our callback function can access Tobii data and process each sample. But there’s one crucial piece missing: we need to know exactly when we presented our stimuli.
In most studies, we present various stimuli - pictures, sounds, or videos. For meaningful analysis, we must know the precise timing of when each stimulus appeared.
The Tobii SDK provides an elegant solution through the `tr.get_system_time_stamp()` function. This function returns the current time using the same system clock that the eye tracker uses for its data timestamps. Remember that each gaze sample includes a `system_time_stamp` field? This means we can perfectly synchronize our events with the gaze data.
We can create a simple event-logging system by storing timestamps alongside descriptive labels:
```python
# Create an empty list for our events
Events = []

# Log an event with the current time from the eye-tracker's clock
Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Our First event!!'})
```
Save the data
Perfect! Now we have two lists containing all our information. They grow continuously (especially the `gaze_data_buffer`) and we need an efficient way to save them.
There are two common approaches:
1. Save immediately: write data inside the callback function, appending to a CSV each time.
2. Save at the end: collect all data in memory for the entire study, then save everything once.
Both approaches have serious drawbacks. The first might slow down our callback function, potentially causing us to miss samples if the computer struggles with frequent file operations. The second approach avoids callback bottlenecks, but if Python crashes during the study (and trust me, it happens!), we’d lose all our precious data.
The solution? A hybrid approach! We store data in memory but save it periodically during quieter moments - like the Inter-Stimulus Interval (ISI) between trials. This timing is perfect because participants are resting anyway.
The Buffer Swap Technique
The key challenge is avoiding duplicate data: we don't want to save the same samples multiple times when we call our save function repeatedly, and we need a way to save the current data without affecting ongoing data collection. Our solution is a buffer swap. It's simple: we keep appending samples to a list, and whenever we want to save we switch it with a new, empty one!!! Here, let me show you how:
```python
# Swap buffers
saving_data, gaze_data_buffer = gaze_data_buffer, []
saving_events, Events = Events, []
```
Here's what happens in each of these swap lines:

- `gaze_data_buffer` (full of samples) gets handed over to `saving_data`
- `gaze_data_buffer` simultaneously becomes a fresh, empty list
- The same happens for our `Events` list
Why this works: the swap happens in a single statement, so we don't lose any data. While we're saving the old samples, new samples keep getting collected in the fresh, empty list.
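Here is the same pattern in isolation, if you want to convince yourself nothing gets lost. Python evaluates the right-hand side before rebinding either name, so a sample arriving mid-swap still lands in the list we are about to save:

```python
buffer = [1, 2, 3]
saving, buffer = buffer, []   # right-hand side (old_list, []) is built first

print(saving)  # [1, 2, 3]  -> ready to be written to disk
print(buffer)  # []         -> new samples are appended here from now on
```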
Now we can safely process and save `saving_data` and `saving_events` while data collection continues uninterrupted in the background.
Align events and data
Next, we need to match up our events with the eye tracking data. First, let’s turn our lists into dataframes to make them easier to work with:
```python
# Convert lists to dataframes
data_df = pd.DataFrame(saving_data)
events_df = pd.DataFrame(saving_events)
```
Now comes the tricky part: our events and eye tracking data have different timestamps, but we want to know what event was happening during each eye tracking sample.
Think of it like this: imagine you have a timeline of eye tracking samples (every few milliseconds) and a few event markers (like “stimulus appeared”). We need to figure out which eye tracking samples happened during each event.
```python
# Find the closest eye tracking sample for each event
idx = np.searchsorted(data_df['system_time_stamp'].values,
                      events_df['system_time_stamp'].values,
                      side='left')

# Add event labels to our eye tracking data
data_df['events'] = ''
data_df.loc[idx, 'events'] = events_df['label'].values
```
The `searchsorted` function looks at all our eye tracking timestamps and finds where each event timestamp would fit in. It's like inserting event markers into our timeline of eye tracking data.
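A quick toy example makes the behavior concrete (timestamps invented for illustration):

```python
import numpy as np

gaze_times  = np.array([0, 10, 20, 30, 40])   # gaze sample timestamps
event_times = np.array([12, 31])              # event timestamps

np.searchsorted(gaze_times, event_times, side='left')
# -> array([2, 4]): each event is attached to the first sample
#    recorded at or after it (here t=20 and t=40)
```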
Now each row of eye tracking data shows what was happening at that moment! This wasn’t so hard, was it?
I'd recommend putting this whole process in a function so you can easily save your data whenever needed:
```python
# Saving function
def write_buffer_to_file(filename):
    global gaze_data_buffer, Events

    # Check if there are data
    if not gaze_data_buffer:
        return

    # Swap buffers - get current data and start fresh
    saving_data, gaze_data_buffer = gaze_data_buffer, []
    saving_events, Events = Events, []

    # Convert lists to dataframes
    data_df = pd.DataFrame(saving_data)
    events_df = pd.DataFrame(saving_events)

    # Match events with eye tracking data (skip if this chunk has no events)
    data_df['events'] = ''
    if not events_df.empty:
        idx = np.searchsorted(data_df['system_time_stamp'].values,
                              events_df['system_time_stamp'].values,
                              side='left')
        data_df.loc[idx, 'events'] = events_df['label'].values

    # Save to CSV file (append mode so we don't overwrite previous saves)
    data_df.to_csv(filename, mode='a', index=False,
                   header=not os.path.isfile(filename))
```
This function automatically grabs all the current data, swaps it with fresh empty lists, and saves everything (the early `return` and the `events_df.empty` check simply skip the steps that make no sense for an empty chunk). The `mode='a'` parameter tells pandas to append new data to the end of an existing file rather than overwriting it. This is perfect for our incremental saving approach: each time we save, we add new rows to our growing CSV file instead of losing previous data. The `header=not os.path.isfile(filename)` part is a clever trick that writes column headers only when creating a new file. This prevents duplicate headers from appearing throughout your data file every time you save.
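You can watch both tricks work with a throwaway file (the name demo.csv is just for this illustration):

```python
import os
import pandas as pd

for chunk in ([{'a': 1}], [{'a': 2}]):
    pd.DataFrame(chunk).to_csv('demo.csv', mode='a', index=False,
                               header=not os.path.isfile('demo.csv'))
# demo.csv now holds a single header row followed by both data rows
```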
You could also save gaze data and events as separate CSV files during collection, then align and merge them during analysis.
However, we prefer combining them immediately for two reasons: it eliminates an extra step later, and it creates cleaner data files for the upcoming tutorials. Both approaches are perfectly valid - choose whichever fits your workflow better. If you’re just starting out, combining them now will make your life easier down the road.
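If you do go the separate-files route, the save step is just two appends, sketched here with hypothetical file names:

```python
# Inside the save function: write gaze and events to their own CSVs
pd.DataFrame(saving_data).to_csv('gaze.csv', mode='a', index=False,
                                 header=not os.path.isfile('gaze.csv'))
pd.DataFrame(saving_events).to_csv('events.csv', mode='a', index=False,
                                   header=not os.path.isfile('events.csv'))
```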
Clean and Prepare Your Data
Great! Now you know how to collect eye tracking data and save it with events. Let’s add a few steps that will make your data much easier to work with and analyze.
Split Gaze Coordinates into Separate Columns
When we save gaze data, coordinate pairs like `'left_gaze_point_on_display_area': (0.463, 0.487)` become single columns containing tuples. This makes analysis difficult: you can't easily plot or calculate with x and y values when they're stuck together. We need separate columns for the x and y coordinates of each eye.
Let’s update our save function:
```python
# Save function
def write_buffer_to_file(filename):
    global gaze_data_buffer, Events

    # Check if there are data
    if not gaze_data_buffer:
        return

    # Swap buffers - get current data and start fresh
    saving_data, gaze_data_buffer = gaze_data_buffer, []
    saving_events, Events = Events, []

    # Convert lists to dataframes
    data_df = pd.DataFrame(saving_data)
    events_df = pd.DataFrame(saving_events)

    # Match events with eye tracking data (skip if this chunk has no events)
    data_df['events'] = ''
    if not events_df.empty:
        idx = np.searchsorted(data_df['system_time_stamp'].values,
                              events_df['system_time_stamp'].values,
                              side='left')
        data_df.loc[idx, 'events'] = events_df['label'].values

    # Split coordinate tuples into separate columns
    data_df[['left_x', 'left_y']] = data_df['left_gaze_point_on_display_area'].tolist()
    data_df[['right_x', 'right_y']] = data_df['right_gaze_point_on_display_area'].tolist()

    # Save to CSV
    data_df.to_csv(filename, mode='a', index=False,
                   header=not os.path.isfile(filename))
```
Perfect! Now we have separate x and y columns for both eyes, making analysis much easier.
Adjust the Data
The Tobii eye tracker gives us coordinates from 0 to 1, where (0, 0) is the top-left corner of the screen and (1, 1) is the bottom-right corner.
While this works fine, it can be confusing during analysis because most plotting systems expect the origin in the bottom-left corner, not the top-left.
Let’s adjust our data to make it more analysis-friendly:
1. Flip the y-axis: move the origin to the bottom-left by flipping the y coordinates
2. Convert to pixels: change from 0-1 coordinates to actual pixel positions
3. Simplify timestamps: convert from microseconds to milliseconds
4. Clean up column names: make them shorter and more intuitive
Here’s our updated function:
```python
# Screen dimensions (replace with your actual screen size)
winsize = [1920, 1080]  # width, height in pixels

def write_buffer_to_file(filename):
    global gaze_data_buffer, Events

    # Check if there are data
    if not gaze_data_buffer:
        return

    # Swap buffers - get current data and start fresh
    saving_data, gaze_data_buffer = gaze_data_buffer, []
    saving_events, Events = Events, []

    # Convert lists to dataframes
    data_df = pd.DataFrame(saving_data)
    events_df = pd.DataFrame(saving_events)

    # Match events with eye tracking data (skip if this chunk has no events)
    data_df['events'] = ''
    if not events_df.empty:
        idx = np.searchsorted(data_df['system_time_stamp'].values,
                              events_df['system_time_stamp'].values,
                              side='left')
        data_df.loc[idx, 'events'] = events_df['label'].values

    # Split coordinate tuples into separate columns
    data_df[['left_x', 'left_y']] = data_df['left_gaze_point_on_display_area'].tolist()
    data_df[['right_x', 'right_y']] = data_df['right_gaze_point_on_display_area'].tolist()

    # Convert and adjust coordinates
    data_df['time'] = data_df['system_time_stamp'] / 1000.0  # microseconds -> milliseconds
    data_df['left_x'] = data_df['left_x'] * winsize[0]
    data_df['left_y'] = winsize[1] - data_df['left_y'] * winsize[1]    # Flip y-axis
    data_df['right_x'] = data_df['right_x'] * winsize[0]
    data_df['right_y'] = winsize[1] - data_df['right_y'] * winsize[1]  # Flip y-axis

    # Rename columns for clarity
    data_df = data_df.rename(columns={
        'left_gaze_point_validity': 'left_valid',
        'right_gaze_point_validity': 'right_valid',
        'left_pupil_diameter': 'left_pupil',
        'right_pupil_diameter': 'right_pupil',
        'left_pupil_validity': 'left_pupil_valid',
        'right_pupil_validity': 'right_pupil_valid'
    })

    # Keep only essential columns
    data_df = data_df[['time', 'left_x', 'left_y', 'left_valid', 'left_pupil', 'left_pupil_valid',
                       'right_x', 'right_y', 'right_valid', 'right_pupil', 'right_pupil_valid', 'events']]

    # Save to CSV
    data_df.to_csv(filename, mode='a', index=False,
                   header=not os.path.isfile(filename))
```
Now our data is in pixel coordinates with the origin at the bottom-left, making it much easier to analyze and visualize!
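Before wiring this into the experiment, it can be reassuring to eyeball a saved file. Here is a quick optional check, assuming you have matplotlib installed and a CSV saved under the file name used later in this tutorial:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('DATA/RAW/S001.csv')
plt.plot(df['left_x'], df['left_y'], '.', alpha=0.3)
plt.xlim(0, 1920)   # screen width in pixels
plt.ylim(0, 1080)   # origin is bottom-left now, so no axis flipping needed
plt.show()
```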
Create the Actual Experiment
Now we have two functions: one to collect eye tracking data and another to save it to CSV. Let’s see how to use these in our actual study.
Short Recap of the Paradigm
We’ll use the experimental design from Getting started with PsychoPy and add eye tracking to it. If you need a refresher on the paradigm, take a quick look at that tutorial.
Here's a brief summary: after a fixation cross, participants see either a circle or a square. The circle predicts a complex shape that will appear on the right side of the screen, while the square predicts a simple shape that will appear on the left.
Putting It All Together
Let’s build the complete experiment step by step.
Import Libraries and Define Functions
First, let’s import our libraries and define the functions we created earlier:
```python
import os
from pathlib import Path

import pandas as pd
import numpy as np

# Import PsychoPy libraries
from psychopy import core, event, visual, sound

import tobii_research as tr

#%% Functions

# Screen dimensions
winsize = [1920, 1080]  # width, height in pixels

# This will be called every time there is new gaze data
def gaze_data_callback(gaze_data):
    global gaze_data_buffer
    gaze_data_buffer.append(gaze_data)

# Save the buffered data and events to file
def write_buffer_to_file(filename):
    global gaze_data_buffer, Events

    # Check if there are data
    if not gaze_data_buffer:
        return

    # Swap buffers - get current data and start fresh
    saving_data, gaze_data_buffer = gaze_data_buffer, []
    saving_events, Events = Events, []

    # Convert lists to dataframes
    data_df = pd.DataFrame(saving_data)
    events_df = pd.DataFrame(saving_events)

    # Match events with eye tracking data (skip if this chunk has no events)
    data_df['events'] = ''
    if not events_df.empty:
        idx = np.searchsorted(data_df['system_time_stamp'].values,
                              events_df['system_time_stamp'].values,
                              side='left')
        data_df.loc[idx, 'events'] = events_df['label'].values

    # Split coordinate tuples into separate columns
    data_df[['left_x', 'left_y']] = data_df['left_gaze_point_on_display_area'].tolist()
    data_df[['right_x', 'right_y']] = data_df['right_gaze_point_on_display_area'].tolist()

    # Convert and adjust coordinates
    data_df['time'] = data_df['system_time_stamp'] / 1000.0  # microseconds -> milliseconds
    data_df['left_x'] = data_df['left_x'] * winsize[0]
    data_df['left_y'] = winsize[1] - data_df['left_y'] * winsize[1]    # Flip y-axis
    data_df['right_x'] = data_df['right_x'] * winsize[0]
    data_df['right_y'] = winsize[1] - data_df['right_y'] * winsize[1]  # Flip y-axis

    # Rename columns for clarity
    data_df = data_df.rename(columns={
        'left_gaze_point_validity': 'left_valid',
        'right_gaze_point_validity': 'right_valid',
        'left_pupil_diameter': 'left_pupil',
        'right_pupil_diameter': 'right_pupil',
        'left_pupil_validity': 'left_pupil_valid',
        'right_pupil_validity': 'right_pupil_valid'
    })

    # Keep only essential columns
    data_df = data_df[['time', 'left_x', 'left_y', 'left_valid', 'left_pupil', 'left_pupil_valid',
                       'right_x', 'right_y', 'right_valid', 'right_pupil', 'right_pupil_valid', 'events']]

    # Save to CSV
    data_df.to_csv(filename, mode='a', index=False, header=not os.path.isfile(filename))
```
Load the Stimuli
Now let’s set up our experiment window and load all the stimuli. This part is identical to our previous PsychoPy tutorial:
```python
#%% Load and prepare stimuli

# Setting the directory of our experiment
os.chdir(r'<<< YOUR PATH >>>>')

# Now create a Path object for the stimuli directory
stimuli_dir = Path('EXP') / 'Stimuli'

# Create a window
win = visual.Window(size=winsize, fullscr=True, units="pix", pos=(0, 30), screen=1)

# Load images
fixation = visual.ImageStim(win, image=str(stimuli_dir / 'fixation.png'), size=(200, 200))
circle = visual.ImageStim(win, image=str(stimuli_dir / 'circle.png'), size=(200, 200))
square = visual.ImageStim(win, image=str(stimuli_dir / 'square.png'), size=(200, 200))
complex = visual.ImageStim(win, image=str(stimuli_dir / 'complex.png'), size=(200, 200), pos=(250, 0))
simple = visual.ImageStim(win, image=str(stimuli_dir / 'simple.png'), size=(200, 200), pos=(-250, 0))

# Load sound
presentation_sound = sound.Sound(str(stimuli_dir / 'presentation.wav'))

# List of stimuli
cues = [circle, square]      # put both cues in a list
targets = [complex, simple]  # put both rewards in a list

# Create a list of trials in which 0 means winning and 1 means losing
Trials = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1]
```
Start recording
Now we’re ready to find eye trackers connected to the computer and start collecting data. We’ll use the first eye tracker we find and launch our callback function to begin data collection.
```python
#%% Record the data

# Define the subject name
Sub = 'S001'

# Find all connected eye trackers
found_eyetrackers = tr.find_all_eyetrackers()

# We will just use the first one
Eyetracker = found_eyetrackers[0]

# Create our data buffers
gaze_data_buffer = []
Events = []

# Start recording
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback, as_dictionary=True)
```
Present Our Stimuli
The eye tracking is running! Let’s show our participant something!
Notice that after each time we flip our window (which actually displays what we drew), we add an event to our Events list with a timestamp and label. This marks exactly when each stimulus appeared.
```python
#%% Trials
for trial in Trials:

    ### Present the fixation
    win.flip()  # Clear the window
    fixation.draw()
    win.flip()
    Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Fixation'})
    core.wait(1)  # Wait for 1 second

    ### Present the cue
    cues[trial].draw()
    win.flip()
    if trial == 0:
        Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Circle'})
    else:
        Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Square'})
    core.wait(3)  # Wait for 3 seconds

    ### Wait for saccadic latency
    win.flip()
    core.wait(0.75)

    ### Present the target
    targets[trial].draw()
    win.flip()
    if trial == 0:
        Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Complex'})
    else:
        Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Simple'})
    presentation_sound.play()
    core.wait(2)  # Wait for 2 seconds

    ### ISI and save data
    win.flip()
    clock = core.Clock()  # start the clock
    write_buffer_to_file(Path('DATA') / 'RAW' / (Sub + '.csv'))
    core.wait(1 - clock.getTime())  # wait for remaining time

    ### Check for escape key to exit
    keys = event.getKeys()
    if 'escape' in keys:
        win.close()
        Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)
        core.quit()
```
As we said before in Save the data, it's best to save data during our study to avoid potential data loss, and it's better to do this when there are things of minor interest going on, such as during the ISI. If you remember from the previous tutorial Getting started with PsychoPy, we created the ISI in a different way than just using `core.wait()`, and we said that this different method would come in handy later on. This is the moment!
Our ISI starts the clock and saves the data immediately. After saving, we calculate how much time remains to reach the full 1-second duration and use core.wait() for any remaining time. This ensures we wait for exactly 1 second total, accounting for the time spent saving data.
```python
### ISI and save data
win.flip()
clock = core.Clock()  # start the clock
write_buffer_to_file(Path('DATA') / 'RAW' / (Sub + '.csv'))
core.wait(1 - clock.getTime())  # wait for remaining time
```
Careful!!!
If saving the data takes more than 1 second, your ISI will also be longer. However, this should not be the case with typical studies where trials are not too long. Nonetheless, it’s always a good idea to keep an eye out.
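A small defensive tweak (our suggestion, not part of the original paradigm) is to clamp the remaining time, so that a slow save never asks core.wait() for a negative duration:

```python
core.wait(max(0, 1 - clock.getTime()))  # never wait a negative amount
```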
Stop recording
Almost done! We’ve collected data, sent events, and saved everything. The final step is to stop data collection (otherwise Python will keep getting endless data from the eye tracker!). We simply unsubscribe from the eye tracker:
```python
# Final save to catch any remaining data
write_buffer_to_file(Path('DATA') / 'RAW' / (Sub + '.csv'))

# Close the window
win.close()

# Stop eye tracking
Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)

# End the study
core.quit()
```
Note that we also closed the PsychoPy window, so the stimulus presentation is officially over. Well done!!! Now go and get your data!!! We'll see you back when it's time to analyze it.
END!!
Great job getting all the way here!! It wasn't easy, but you did it. Here is all the code we made together.
```python
import os
from pathlib import Path

import pandas as pd
import numpy as np

# Import PsychoPy libraries
from psychopy import core, event, visual, sound

import tobii_research as tr

#%% Functions

# Screen dimensions
winsize = [1920, 1080]  # width, height in pixels

# This will be called every time there is new gaze data
def gaze_data_callback(gaze_data):
    global gaze_data_buffer
    gaze_data_buffer.append(gaze_data)

# Save the buffered data and events to file
def write_buffer_to_file(filename):
    global gaze_data_buffer, Events

    # Check if there are data
    if not gaze_data_buffer:
        return

    # Swap buffers - get current data and start fresh
    saving_data, gaze_data_buffer = gaze_data_buffer, []
    saving_events, Events = Events, []

    # Convert lists to dataframes
    data_df = pd.DataFrame(saving_data)
    events_df = pd.DataFrame(saving_events)

    # Match events with eye tracking data (skip if this chunk has no events)
    data_df['events'] = ''
    if not events_df.empty:
        idx = np.searchsorted(data_df['system_time_stamp'].values,
                              events_df['system_time_stamp'].values,
                              side='left')
        data_df.loc[idx, 'events'] = events_df['label'].values

    # Split coordinate tuples into separate columns
    data_df[['left_x', 'left_y']] = data_df['left_gaze_point_on_display_area'].tolist()
    data_df[['right_x', 'right_y']] = data_df['right_gaze_point_on_display_area'].tolist()

    # Convert and adjust coordinates
    data_df['time'] = data_df['system_time_stamp'] / 1000.0  # microseconds -> milliseconds
    data_df['left_x'] = data_df['left_x'] * winsize[0]
    data_df['left_y'] = winsize[1] - data_df['left_y'] * winsize[1]    # Flip y-axis
    data_df['right_x'] = data_df['right_x'] * winsize[0]
    data_df['right_y'] = winsize[1] - data_df['right_y'] * winsize[1]  # Flip y-axis

    # Rename columns for clarity
    data_df = data_df.rename(columns={
        'left_gaze_point_validity': 'left_valid',
        'right_gaze_point_validity': 'right_valid',
        'left_pupil_diameter': 'left_pupil',
        'right_pupil_diameter': 'right_pupil',
        'left_pupil_validity': 'left_pupil_valid',
        'right_pupil_validity': 'right_pupil_valid'
    })

    # Keep only essential columns
    data_df = data_df[['time', 'left_x', 'left_y', 'left_valid', 'left_pupil', 'left_pupil_valid',
                       'right_x', 'right_y', 'right_valid', 'right_pupil', 'right_pupil_valid', 'events']]

    # Save to CSV
    data_df.to_csv(filename, mode='a', index=False, header=not os.path.isfile(filename))

#%% Load and prepare stimuli

# Setting the directory of our experiment
os.chdir(r'<<< YOUR PATH >>>>')

# Now create a Path object for the stimuli directory
stimuli_dir = Path('EXP') / 'Stimuli'

# Create a window
win = visual.Window(size=winsize, fullscr=True, units="pix", pos=(0, 30), screen=1)

# Load images
fixation = visual.ImageStim(win, image=str(stimuli_dir / 'fixation.png'), size=(200, 200))
circle = visual.ImageStim(win, image=str(stimuli_dir / 'circle.png'), size=(200, 200))
square = visual.ImageStim(win, image=str(stimuli_dir / 'square.png'), size=(200, 200))
complex = visual.ImageStim(win, image=str(stimuli_dir / 'complex.png'), size=(200, 200), pos=(250, 0))
simple = visual.ImageStim(win, image=str(stimuli_dir / 'simple.png'), size=(200, 200), pos=(-250, 0))

# Load sound
presentation_sound = sound.Sound(str(stimuli_dir / 'presentation.wav'))

# List of stimuli
cues = [circle, square]      # put both cues in a list
targets = [complex, simple]  # put both rewards in a list

# Create a list of trials in which 0 means winning and 1 means losing
Trials = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1]

#%% Record the data

# Define the subject name
Sub = 'S001'

# Find all connected eye trackers
found_eyetrackers = tr.find_all_eyetrackers()

# We will just use the first one
Eyetracker = found_eyetrackers[0]

# Create our data buffers
gaze_data_buffer = []
Events = []

# Start recording
Eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA, gaze_data_callback, as_dictionary=True)

#%% Trials
for trial in Trials:

    ### Present the fixation
    win.flip()  # we flip to clean the window
    fixation.draw()
    win.flip()
    Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Fixation'})
    core.wait(1)  # wait for 1 second

    ### Present the cue
    cues[trial].draw()
    win.flip()
    if trial == 0:
        Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Circle'})
    else:
        Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Square'})
    core.wait(3)  # wait for 3 seconds

    ### Wait for saccadic latency
    win.flip()
    core.wait(0.75)

    ### Present the targets
    targets[trial].draw()
    win.flip()
    if trial == 0:
        Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Complex'})
    else:
        Events.append({'system_time_stamp': tr.get_system_time_stamp(), 'label': 'Simple'})
    presentation_sound.play()
    core.wait(2)  # wait for 2 seconds

    ### ISI and save data
    win.flip()
    clock = core.Clock()  # start the clock
    write_buffer_to_file(Path('DATA') / 'RAW' / (Sub + '.csv'))
    core.wait(1 - clock.getTime())  # wait for remaining time

    ### Check for closing experiment
    keys = event.getKeys()  # collect list of pressed keys
    if 'escape' in keys:
        win.close()  # close window
        Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)  # stop eye tracking
        core.quit()  # stop study

# Final save to catch any remaining data
write_buffer_to_file(Path('DATA') / 'RAW' / (Sub + '.csv'))

# Close window
win.close()

# Unsubscribe from eye tracking
Eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)

# Stop study
core.quit()
```