
Open Field Activity Tracking Webinar with BehaviorCloud & San Diego Instruments

We thoroughly enjoyed the opportunity to co-present our Open Field Activity Tracking summer webinar alongside our friends at San Diego Instruments. If you've ever been curious about the strengths and weaknesses of photobeam-based activity tracking compared with video-based tracking and you were unable to join us the other day, you can find the video below as well as the full text of the session for your reference.


Video link: https://youtu.be/7B2zkSUCPIU


Chloe: Welcome everyone to our webinar! We’re excited to go through this with you. Let’s go through a brief introduction; Carlos, do you want to go first?

Carlos: Hi everyone, my name is Carlos Arnaiz. I'm the behavioral neuroscientist and product research specialist for San Diego Instruments. I've been in the industry for about 25 years. I started as a biofeedback therapist and worked with humans for about five or six years. I went to graduate school at the University of Florida, where I spent time in the Department of Psychology, the Department of Neuroscience, and the Department of Physical Therapy before starting in industry. I worked with Coulbourn Instruments and Harvard Apparatus, directly through PanLab, and others before coming to San Diego [Instruments] four years ago. So I'm quite happy to be here, and we appreciate all of your time.

Chloe: Yes, we are happy you're here too, Carlos. Ok, I'll go ahead and introduce BehaviorCloud and myself. BehaviorCloud was created in 2017 to improve processes in the lab. We wanted to improve our workflow by establishing a web-based platform to increase efficiency, improve data collection, and also foster collaboration within the team. And so BehaviorCloud was born. I joined the team in early 2019, and since then I've helped labs across the globe with their research.

Previously, I was a researcher for the Army studying PTSD and TBI while doing my Master's at Johns Hopkins University. Then I went on to become a behavioral research consultant for both human and animal lab models at Noldus Information Technology. We're happy that you all could come, and we're happy that San Diego Instruments is co-hosting this "Open Field Activity Tracking" webinar event with us. So let's move on.

We want this webinar to be as interactive as possible, so we would really appreciate it if you could ask questions in the chat window. If you're on Zoom, you can open the chat window by clicking either to the right of or at the top of the Zoom window on your computer. If you want to test that out, feel free to write us a question during the course of the webinar. Feel free to stop us and ask questions at any time.

I’ll go ahead and launch the poll now. We just want to ask you, have you used the open field in your research before? This is for us to gauge where you are in your research journey and how experienced you are. So we have a few people answering questions right now. 10 out of 15 people are voting, and I’ll go ahead and stop the poll now. Some people are on the phone, so they can’t answer. Everyone should be able to see the results now, as you can see most people have used video tracking with their open field research, some people have used IR-beam based tracking. And some are choosing “other”, therefore we must have some newbies to the group as well. And like I said feel free to ask questions during the webinar in the chat window.

That leads me to: what is an open field used for? As you can see from the poll, we have video-based tracking and IR beam-based tracking; these are the types of tools you can use for tracking in the open field. But the open field in its most basic form is simply an arena or enclosure that you place an animal into so that you can observe its behavior. This test is very modifiable, so depending on what you're studying you can add different things to it.

For example, you can change the color of the arena that you're using. You can add different things to it, like shock inserts, dividers, or items for novel object recognition. I've seen it used with rats and mice, but you could use it for anything from small mammals and non-human primates to fish or even insects. Does anyone have any specific area of interest? It is most commonly used for anxiety/depression studies, as it was first used for studying anxiogenic or anxiolytic effects, specifically by pharmaceutical companies, since you can see the effect of a drug on locomotor activity. For example, are the animals staying in the middle of the arena versus the border, and what does that behavior mean for the researcher? Carlos, do you want to expand on any of these topics?

Carlos: Sure, anytime that you use a video tracking system, you can track anything that the camera can see. So whatever behavior you might score with your own eyes, you can score in this environment, whether it be an open field or some other environment. The key here is to know what it is you're ultimately looking for. If there are any particular behaviors that you are looking to get more information about, by all means just type them into the chat box and we'll make sure to highlight them.

Chloe: Okay, so I guess we can move on from there. Like Carlos mentioned there are two methods, video based tracking and also IR beam break tracking. And within these two types of experiments, you can do both short-term and long-term experiments. Carlos, do you want to start off with the IR beam break-based tracking?

Carlos: Sure. Photobeam activity detection is one of the longest-standing ways of instrumenting behavior in neuroscience. Video [based tracking] has been around a long time, but until recently it was always very expensive, and photobeams are very straightforward. You encounter them in your environment as you move about your world without even realizing it, and in the laboratory setting they're quite robust. They allow you to track an object in a field, and in multiple fields simultaneously, with very little computing power or time on your part.

So the main types are video and photobeam, as indicated here. And whether you are doing long-term or short-term work, photobeam systems are very useful, whether you're looking at a drug dose response on the order of minutes, maybe an hour or two, or whether you are looking at circadian, 24-hour data over multiple days. With video tracking you can still do long-term experiments, but it gets a little tedious, and the number of subjects that you can look at simultaneously is limited by bandwidth, which depends on the computers that are available. But the two main systems can also be superimposed, sort of like belt and suspenders. There's nothing to prevent you from doing both, so there are hybrids available.

For instance, you can have a photobeam system doing both locomotor and rearing detection, or even light-dark box comparisons; or you can superimpose a video camera, with rearing detection done only by the photobeams, or light-dark activity tracked whether the animals are in or out of view of the camera. So there are ways that you can have your cake and eat it too, by taking advantage of the strengths and limitations of each individual system.

Chloe: Ok, that's great to know. Someone has asked in the past: if I were going to choose a system that would allow for more automation, which one would be more beneficial?

Carlos: I would personally think that a photobeam system allows for the most automation and ease of set-up. It depends on the environment and what you're ultimately looking for, but I think the question here is really about the lux requirements. Photobeam systems are less affected by ambient light, and they even work in total darkness. With video-tracking systems you're limited by the camera, the size of your lens, or the iris adjustment. But then again, we now have infrared illumination, where the camera can see in the infrared, and I don't mean a multicolored rainbow, I literally mean night vision: we can illuminate in a spectrum that the subject cannot see but the camera can see and therefore record. So in a properly set up low-light session, you can't tell from the recording whether the visible lights in the room are on or off.

And going back to what's easier: photobeams are not bothered by the general illumination of the room. Obviously, a fluorescent bulb directly above the arena might cause some glare on the photocells themselves, but that is the exception rather than the rule, whereas with a video camera you've got to make sure that your lighting is set up properly to avoid glare; ideally, indirect light is best. So it's easier to set up a photobeam system than a video-based system. As for the actual number of lux, it really depends. I've seen a standard camera go down to three or four lux without a problem. It's when you get down to zero, i.e. absolute darkness, that you're going to have an issue.

Chloe: And what do you think about doing video tracking in red light, I’ve seen a trend with that recently?

Carlos: Well, that's an excellent point. A lot of people, especially when we're talking about rodents, don't realize that they're most active at dawn and dusk, when the light in their natural environment has the most red in it. So it depends on what we're talking about: if we mean infrared light, the animals can't see that; if we mean a generic red light bulb, that's still in the visible spectrum and might actually stimulate activity, so that's different. What I think most people are going for is a low light level, a low number of lux. Again, depending on the question, the light cycle, and the time of day, you can manipulate all of these things and ask different questions. So as for generating or eliciting more or less activity in a control subject, that's an empirical question that most people have to answer for themselves in a pilot experiment.

Chloe: And I know we touched on the differences between video tracking and photobeam-based tracking, but can you go over the details of what an IR beam break is and how it actually tracks the animal?

Carlos: Excellent question. In the photo on the screen there in the upper right-hand corner, you've got a plexiglass arena, and that could be your generic arena whether you're doing video-based or photobeam-based tracking. What you see here is a metal frame around the perimeter that has a series of photoemitters and photodetectors, in this case a 16 by 16 beam array. Each pair of beams is activated one at a time, sampling through the arena at a high rate, so if the subject is moving about in the arena we can triangulate the subject's location based on which beams are obstructed at any given moment.

We can sample here at 50 hertz or 100 hertz and very quickly know where the subject is; they will presumably be breaking multiple beams in the X-axis and the Y-axis, and if they're standing up, they're breaking beams in the Z-plane. You can see here there are two sets of beams: one lower set for locomotor activity on the floor plane, and an upper set for when the animal rears up to do some exploratory behavior up the wall.
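
To make the beam-break geometry above concrete, here is a minimal Python sketch of the idea Carlos describes: at each sample, the subject's position is estimated from which beams along each axis are obstructed. The grid size, beam spacing, and function names are hypothetical illustration values, not SDI's actual firmware logic.

```python
# Illustrative sketch (not SDI's firmware): estimating a subject's position
# from which beams in a 16 x 16 photobeam grid are obstructed at one sample.
# The arena size and beam spacing below are hypothetical example values.

ARENA_SIZE_CM = 40.0                      # hypothetical square arena
N_BEAMS = 16                              # 16 x 16 beam array, as described above
BEAM_SPACING = ARENA_SIZE_CM / N_BEAMS    # cm between adjacent beams

def centroid_from_breaks(broken_x, broken_y):
    """Return an (x_cm, y_cm) center-of-mass estimate from broken-beam indices.

    broken_x / broken_y are 0-based indices of obstructed beams along each
    axis for a single sample (sampled at e.g. 50-100 Hz).
    """
    if not broken_x or not broken_y:
        return None  # no subject detected on this sample
    x_cm = (sum(broken_x) / len(broken_x) + 0.5) * BEAM_SPACING
    y_cm = (sum(broken_y) / len(broken_y) + 0.5) * BEAM_SPACING
    return (x_cm, y_cm)

# Example: a mouse blocking beams 6-8 on the X axis and 10-11 on the Y axis
print(centroid_from_breaks([6, 7, 8], [10, 11]))  # -> (18.75, 27.5)
```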

Chloe: Would you say that there’s a difference between unsupported rearing and supported rearing?

Carlos: Well, I think it means different things to different people, but there's rearing activity where the animal is arching its head up, maybe standing on one foot or reaching up with one hand, and that kind of thing, but that's something that photobeam systems as a whole won't be able to discriminate. Video-based systems are also mostly viewing from above, so it's really up to what the camera can see and what the camera can discriminate. If we're going to take a sidetrack here on rearing: for a video tracking system it's mostly about elongation, whether the animal is longer and presumably on all fours, or shorter and squatter and presumably rearing up, but that's limited by the resolution.

Going back to the differences and how a photobeam system detects the animal: it's constantly sampling, and it writes an event mark to a data file every time there's a change in coordinate location. The systems are predominantly looking at the center of mass of the subject; there's no head versus tail, it's just the centroid location. So whether the animal is walking about the center or doing thigmotaxic wall-walking, aka hugging the walls, we will know where its center of mass is. With a video-based system, the resolution is whatever the resolution of your camera is: how zoomed in you are, how many fields of view you're tracking, how many subjects are in a single field of view. And if you save the video recording (you don't have to, but most people do), you can always go back and hand-score the things that can't be automated, such as supported or unsupported rearing.

If a subject is scratching a left ear versus a right ear, no photobeam system is going to be able to tell you that, and I doubt any video system can tell you that either. But we can rely on the ability to naturally observe the raw data from a recording and superimpose those observations as event marks on the data file.
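
As a rough illustration of the event-marking approach Carlos mentions, here is a short Python sketch that appends a record only when the centroid coordinate changes, which is what keeps long photobeam sessions compact. The record layout and names are hypothetical, not SDI's actual data file format.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PositionEvent:
    time_s: float  # time of the coordinate change
    x: int         # beam-grid column of the centroid
    y: int         # beam-grid row of the centroid

def event_mark(samples: List[Tuple[float, int, int]]) -> List[PositionEvent]:
    """Convert full-rate (time_s, x, y) samples into change-only events."""
    events: List[PositionEvent] = []
    last: Optional[Tuple[int, int]] = None
    for t, x, y in samples:
        if (x, y) != last:                  # coordinate changed -> log an event
            events.append(PositionEvent(t, x, y))
            last = (x, y)
    return events
```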

Chloe: With photobeam systems, you can probably connect more than you could with a video-based system, right?

Carlos: Correct. Most video-based systems allow for a maximum of four cameras or thereabouts. With more than four cameras per computer you start to tax the processor, and also the hard drive, since video generates a lot of data and takes up a lot of space even at lower resolutions. With a photobeam system, you can typically have as many as thirty-two open fields running simultaneously if you're just looking at X-Y coordinate data. As soon as you add rearing, you divide that bandwidth in half, so you now have 16 arenas that you can test simultaneously, which is still a lot. Most people run out of lab space before they run out of processing power on their computers.

Chloe: Yes, that’s pretty good, and a pretty big cohort all at once.

Carlos: Yes, it's a lot easier to just automatically start collecting data as soon as the system detects a subject, load in subject after subject after subject, and run for a long period of time without having to worry about overloading your hard drive or storing a large data file. For high throughput, photobeam systems have always been ideal.

Chloe: Carlos, I believe we received a question from Tom, it says, “can mice see IR light?”

Carlos: Well, by definition they can't. Now, there are poor-quality infrared emitters and high-quality infrared emitters, and anyone who's ever seen a surveillance camera with a ring of LEDs around it can tell the difference. If you can see a little red pin light, or a small beam of light, or a tiny glow from the LED, that's an LED that emits not only infrared that you can't see but also regular red that you can see. With higher-quality LEDs, you can't tell whether they're on or off when powered unless you're looking through the lens of a camera.

Some smartphone camera lenses can see it, some can't, but that's a real giveaway between a quality infrared illuminator and one that's more consumer-grade. I will say this about what light they can and cannot see: there are some papers out there about whether or not albinos can distinguish red from green from yellow from white. Those papers go back 30, 40, 50 years now. Some people debate whether it's the wavelength or simply the intensity of the light, so it might be a power question. I just dangle that out there for you to consider.

Chloe: Yes, it's good for them to be informed so that they can form their own opinions about this as well. Now we can move on to the Hardware Bridge. This is the device that connects San Diego Instruments' system to the BehaviorCloud platform. I'll go ahead and press start. As you can see, we put a Sphero in the [photobeam activity system] and it detects the XY coordinates of the subject within the arena. All of the data is coming from the San Diego Instruments hardware in real time, shown here recording on an iPhone. You can record multiple setups at a time, and essentially use the bridge to integrate any third-party device with the BehaviorCloud platform to upload and analyze the data right there.

You can bypass your computer and have it streaming on your mobile device, either your Android device or your Apple device. You just set up the test, type in the experiment name, collect your data, and analyze it, all from the web. This is convenient if you want to monitor your experiments from home: say it's a long-term study, you go home for the night, and you want to check that your experiment is still running. You just log in with your username and password and check that it's still collecting data. You don't have to worry about losing any data if someone leaves the lab or the computer turns off; it's all there. You can share it with other lab members or across other institutions. [For this reason] it's really nice for reproducing results or fostering collaboration.

Carlos: Yes, one of the nice things I find about the hardware bridge is that you don't need the computer in the laboratory as long as you've got that ethernet/wifi connection from the control box that the hardware is all connected to. Instead of going to a computer, you go into the bridge, you're instantly on the cloud, and all of the data is stored there, but you can also view it live as it's happening, which is a nice feature, especially if multiple people want to observe what's going on in a particular arena or group. All of the data appends as it goes, so it's a nice way to move away from a computer, where you worry about viruses, hard drive storage, and power outages, that kind of thing. It's a nice safety net.

Chloe: Yes, I remember in my lab we were constantly uploading data onto a CD, and it was kind of a hassle trying to get it onto a regular computer since new laptops do not have disk drives anymore. So BehaviorCloud definitely makes it a lot easier. I know we have already touched on some strengths and limitations, but for the open field, some of the strengths are that it's easy to use and it's easily modifiable to whatever the researcher is studying. You don't have to train the animal to do anything; you just place the animal in there and see what happens. As for the limitations, it may need to be used in conjunction with another test, because you may not be able to distinguish whether a behavior is related to anxiety, or is fear-induced behavior related to the lighting, or something else.

Carlos: Yes, I would agree. If you're talking about fear conditioning, a photobeam system really isn't going to be as precise as a video system might be in terms of seeing whisker movement or very subtle, rapid breathing, that kind of thing. However, if you're just interested in conditioning the animal, you know that your animals are either going to condition or they're not, and you want to run many subjects through quickly, then a photobeam system will be more efficient. At the end of the day, it's about throughput and resolution. If you are looking for very subtle behaviors that a human observer is ultimately going to have to validate, then a photobeam system is going to be limited in that respect. And that's why you might want to superimpose a video or go with video altogether.

Chloe: Yes, and there's also less bias with a photobeam system. You place the animal in there and you can practically run it for as long as you want. The data you are collecting isn't as large as with video-based tracking. You can start it, leave, and come back.

Carlos: Right. You end up with a very long list of X-Y coordinates over time as opposed to high-resolution images that you ultimately only need a few frames per second from.

Chloe: So we got a couple more questions. Tom asks, "can you do the analysis for the data in the cloud too?" The answer is yes. You can do all of the analysis in the BehaviorCloud platform. You can collect measures like time spent in a zone, latency, distance traveled, etc. And then Ashley asks, "is there something that can be used for social groups?" That depends on what type of social experiment you're doing. If it's multiple animals in the same arena, then you probably want to do video-based tracking, and you probably want to use a combination of hand scoring and color-marking the animal of interest so you can identify it. But then again, it depends on how many animals you are testing.
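
For readers who want to see the arithmetic behind these measures, here is a hedged Python sketch of how distance traveled, time in a center zone, and latency to first center entry can be computed from generic (time, x, y) samples. The arena and zone dimensions are example values, and this is not BehaviorCloud's actual analysis code.

```python
import math

ARENA_SIZE_CM = 40.0      # hypothetical square arena
CENTER_MARGIN_CM = 10.0   # center zone = arena minus a 10 cm border

def in_center(x, y):
    return (CENTER_MARGIN_CM <= x <= ARENA_SIZE_CM - CENTER_MARGIN_CM and
            CENTER_MARGIN_CM <= y <= ARENA_SIZE_CM - CENTER_MARGIN_CM)

def summarize(track):
    """track: list of (time_s, x_cm, y_cm) tuples, sorted by time."""
    distance = 0.0
    center_time = 0.0
    latency = None
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        distance += math.hypot(x1 - x0, y1 - y0)   # path length between samples
        if in_center(x0, y0):
            center_time += t1 - t0                 # dwell time in the center zone
            if latency is None:
                latency = t0                       # first time seen in the center
    return {"distance_cm": distance,
            "center_time_s": center_time,
            "latency_to_center_s": latency}
```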

Carlos: Yes, I'd like to add to that. It depends on how many experimental subjects you have, and in social interaction, how many stranger animals you have. Typically, in a video tracking system, you hide all of the subjects from the field of view other than the one you're studying. Photobeam systems are really only designed to have one subject per arena, and to my knowledge there is only one really good video tracking system that can track multiple subjects in the same arena. As Chloe is saying, they all have to be color-coded, they literally need to have their coats dyed, and then you're limited by the differences in color.

Chloe: And it's difficult if the number is more than four at a time, since you need that many different colors. The leaders in social interaction research don't actually like video-based tracking, because it's in the animals' inherent nature to climb on top of each other. So in reality, if they're being as interactive as possible, they're cuddling together and climbing on top of one another, which hides the color mark, which makes it difficult with video tracking as well. But if you want to do a social defeat task or a sociability protocol, then you would restrain the other animals away from the animal of interest, and then it is possible, since you're only tracking one animal at a time. Could you also do this with photobeams if the other animals are constrained?

Carlos: The only time I've ever seen a successful social interaction task with photobeams is when you have one subject in the arena that you're tracking with the photobeams and all of the other subjects are outside of that physical space, so only the one subject can break the beams. Then it's kind of like a choice: is the subject spending time near one group of animals or away from that group? The photobeams don't know who's breaking the beam; they just know that there's an obstruction, and since you know where the beams are positioned, that gives you a location and the time spent at that location.

Chloe: Open field protocols. So one of the questions I’m frequently asked is how long do I run the open field experiment for? Some people have said 10 minutes, some people have said 15 minutes, or even more than an hour. What’s your opinion on this, Carlos?

Carlos: Well, it depends on the question. It depends on whether you are looking at a drug that takes an hour to come on or 10 minutes to come on, and also whether or not you are going to be doing something in the middle of the experiment, like an injection or some other stimulus such as a light or shock. The short answer is that with a photobeam system, duration isn't really a limitation. Although I would say, once you start getting into 24 hours a day, 7 days a week, data management becomes a consideration, and in practice it ends up being more like 23 and a half hours a day, because you have to go in and service the cage for one reason or another. That's really more of an issue with a video tracking system.

Chloe: And do you have any good practices for conducting your open field protocol? Maybe it has to do with the lighting, maybe it has to do with cleaning?

Carlos: Well, absolutely cleaning, especially if it isn't a home cage environment. I will say this: most open field arenas are made of acrylic, and people want to use some sort of alcohol-based cleaner. I highly encourage people not to use any alcohol-based cleaner, simply because the alcohol evaporates so quickly that the acrylic cracks and breaks over time. You do want to use something like Dawn dish soap with warm water, or some other cold sterilization; the one we use is called Airx RX. Any liquid sanitizer that is not alcohol-based will do.

In terms of lighting, generally in these kinds of experiments you either want bright light if you are trying to stress the animal, or low light if you're trying to encourage more activity. The question is going to dictate that environment; the point is that you can do either. The photobeam instrument isn't very susceptible to lighting. In video systems, lighting is everything: poor lighting is going to give you poor data, especially if you have glare. Whether you want it dark, brightly lit, or somewhere in between is really going to be dictated by the nature of the question being asked.

Chloe: Yes, I'll also add that I've seen mazes or open fields made from all sorts of things, so you just have to make sure it's a non-porous material; e.g., don't use anything like wood. On that note, make sure that the open field you're using suits the size of the animal. I've seen people use the little tubs or bowls you would see in a kitchen, and the animal jumps right out of those; with video-based tracking, when the animal hops out of view you can't track it anymore, so just make sure you've covered all your bases.

Carlos: I will say that that's the good thing about video-based tracking: the video doesn't care what shape or size the arena is, so long as the entire arena is in the camera's field of view. Whereas with a photobeam system you're going to be limited in size, and it's usually not huge, because photobeams aren't lasers; they're more like flashlights. The closer you are to the object, the tighter the beam, and as you increase the distance from the emitter to the detector, you get a more dispersed distribution of the light. That's another reason why only one emitter-detector pair circling the arena is active at a time. Photobeam arenas are going to have a fixed size, and with a video tracking system you have to tell the software where your edges are and what your distances are, whereas in a photobeam system that's predetermined by the hardware set, if you will.

Chloe: Ok, next slide. Here are all of the data and parameters that you might be looking for. After we've conducted our open field test, we want to know what to look for, so these are some of the things you should be looking at. Obviously, it depends on what you are studying. But let's say we are studying anxiety/depression. What would you be looking for, Carlos?

Carlos: In terms of the data analysis, the wonderful thing about both of these systems is that you can go in with a basic question and look for a simple data point, whether it's distance traveled or general activity over time, and then if you find something, or don't find something, you can always go back after the fact and reanalyze, looking for a different behavior, because you've collected the behavior and the history is there. You can publish a study and then 10 years later discover something, go back, and look at these data and analyze them a different way. In terms of a particular assay, such as depression or whatnot, I prefer to think in terms of frequencies or counts of a particular behavior, whether it be rearing, corner time, or center time, or whether you've painted the floor of one corner with a particular odor versus the other. There are all kinds of things you can do in an open field; just look at the basic measures, most of which you have listed here.

Chloe: So we have a question. It says, "so is see-through better or worse, I think bigger is better?" For video-based tracking, it depends on how your tracking algorithm works. With BehaviorCloud, our algorithm is motion-based, so it doesn't matter what color open field you have. However, most algorithms base detection on pixel color change, which means you should have a background that contrasts with your animal: a white Sprague Dawley rat on a white background is not very optimal. You want a black-6 mouse on a white background, or a Sprague Dawley rat on a gray or blue background. For photobeams it doesn't matter, since the open fields are typically clear to allow the IR beams to pass through.
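
To illustrate the distinction between motion-based and contrast-based detection, here is a generic Python sketch using frame differencing and background subtraction on grayscale frames stored as NumPy arrays. It is only a simplified stand-in, not BehaviorCloud's actual algorithm, and the threshold values are arbitrary examples.

```python
import numpy as np

def detect_by_motion(prev_frame, frame, diff_thresh=25):
    """Motion-based: flag pixels that changed between consecutive frames.
    Works regardless of arena color, but requires the subject to move."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > diff_thresh  # boolean mask of "moving" pixels

def detect_by_contrast(frame, background, diff_thresh=40):
    """Contrast-based: flag pixels that differ from a stored background image.
    Needs the coat to contrast with the arena (e.g. a dark mouse on a light
    floor); a white rat on a white floor yields a weak mask."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > diff_thresh

def centroid(mask):
    """Reduce either mask to a single (x, y) tracking point."""
    ys, xs = np.nonzero(mask)
    return None if xs.size == 0 else (xs.mean(), ys.mean())
```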

Carlos: Right, and I think he's also asking about whether, if the arenas are side by side, the animals can see each other. With a see-through wall, the subjects can be distracted by what's going on in the environment, if not by each other. Again, with a photobeam system you can always darken the walls from the outside and just leave clear the one strip where the photobeams go around, if you want it darker that way, or you could put dividers or partitions between the arenas.

There's also the option of an isolation cabinet that you can put the whole setup inside to control the ambient light, a bright light, a fan, or the temperature in that particular chamber, for instance. And bigger isn't always better: if you give an animal too much environment, you can artificially stimulate extra exploratory behavior, or fatigue the exploratory behavior; and if it's too small, you don't get as much exploratory time as you might want for your particular question. So generally, if you're studying a mouse, you're going to want a slightly smaller arena, somewhere between 12 and 16 inches square, and with a rat you're going to want 16 inches or larger. 16 inches is the de facto standard that I think all commercially available systems follow, just because that's how it started in the beginning with photobeam systems, and then everybody wanted to replicate everybody else. If you need something much larger, then a video tracking system is definitely the way to go. Our largest arena, I think, is 4 feet by 4 feet, unless you want to consider a water maze, a 6-foot-diameter tank, for instance.

Chloe: But you also have to remember, if you are doing video-based tracking, your field of view is limited by how high you can mount the camera below your ceiling.

Carlos: To a certain extent, you can always have different interchangeable lenses, like fisheye, wide angle, etc.

Chloe: Ok, so this is the final slide of the presentation; does anyone else have any questions? To add to your points earlier about the size of the arena, it is also in rodents' inherent nature to avoid large, open spaces, based on predator/prey behavior. So you do not want the open field to be too large for that reason as well.

Carlos: Yes, if you limit the area you tend to get better behavior in a focused sense. Fear conditioning tends to work better if you use a small to moderate area rather than a moderate to large one.

Chloe: Ok, Javier just sent me a question privately. It says, "have there been improvements to SDI Reporter to produce heat maps from XYZ data?"

Carlos: Yes, I just sent Javier the answer. We have updated SDI Reporter. For those of you who are unfamiliar with heat maps, it's just another form of graphing your data. The new version of Excel actually has a nice chart type for heat maps, and if you want to send us a sample of your data we can go through it with you. For those who aren't familiar with a heat map or density map, you can look at how much time an animal or group of animals is spending in a particular zone of the arena by color-coding it, much like a weather forecast shows the amount of rain falling in a particular section of a map. If we have a data file or data set with coordinate locations and activity counts for each location, we can graph that a lot of different ways: as a track plot (a squiggly line), or as a heat map with color-coded zones.
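
As a rough example of what producing a heat map from coordinate data involves, here is a short Python sketch that bins (x, y) samples into an occupancy grid and plots it; the bin count and arena size are hypothetical, and this is not how SDI Reporter itself is implemented.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_heatmap(xs, ys, arena_size_cm=40.0, bins=16):
    """Bin coordinate samples into a 2-D occupancy grid and display it."""
    grid, _, _ = np.histogram2d(xs, ys, bins=bins,
                                range=[[0, arena_size_cm], [0, arena_size_cm]])
    plt.imshow(grid.T, origin="lower",                       # y increases upward
               extent=[0, arena_size_cm, 0, arena_size_cm],
               cmap="hot")
    plt.colorbar(label="samples per bin")
    plt.xlabel("x (cm)")
    plt.ylabel("y (cm)")
    plt.show()
```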

There's another question about a round arena versus a square one. That's also an excellent point. Some animals, if given the opportunity, will linger in a corner, and if you deny them a corner you might induce more behavior, so in that situation a round open field is nice. Photobeam arenas at this point are square; a round one could probably be accommodated, but it would be a custom, non-standard build. A video tracking system can readily accommodate any shape of arena. It's an interesting question to compare, all things being equal, open fields that are square versus round versus hexagonal or octagonal, and see the effect of the corner angle, or the presence of a corner at all. I think that's quite interesting personally.

Chloe: Feel free to keep chatting away in the chat window. We thank you for attending the webinar today, and we hope that you learned something from it.

Carlos: We’re available if anything comes up afterwards, when you’re thinking back on it. Feel free to reach out. I’m happy to have a conversation with you about these or any other things related to measuring behavior. My goal is always to make sure you’re getting the right tool for your particular research question, be it a particular instrument or methodology, so have no fear, reach out, I’m happy to help you regardless.

Chloe: We will also be sending you a copy of the webinar, and we'll send you a survey to gather your feedback on how we can improve our future events. If you want a topic to be covered in the future, we would welcome that feedback as well. Our contact info is listed here; feel free to reach out to Carlos and me about BehaviorCloud and San Diego Instruments equipment, or simply to talk about neuroscience. Thank you all for attending; you'll hear from us soon.


Please reach out to us with requests for future webinar topics at hello@behaviorcloud.com
