SciLux

Season 4 - Episode 8 - Space Robotics

Hanna Siemaszko Season 4 Episode 8

After a couple of weeks of discussion, we finally had the pleasure of talking to the head of the amazing space labs at SnT (University of Luxembourg) and one of the postdocs in the Space Robotics research group (SpaceR). Prof. Miguel Olivares-Mendez and Dr. Carol Martinez told us what a robot really is, what it means for a robot to have some intelligence, and what it takes to create labs that emulate space conditions. We also touched on the increasingly important issue of space debris and talked about the upcoming iSpaRo24 conference.

USEFUL LINKS

SnT - https://www.uni.lu/snt-en/
SpaceR - https://www.uni.lu/snt-en/research-groups/SpaceR/
Carol Martinez's Website - https://carolmartinez.github.io/
iSpaRo24 - https://www.isparo.space/

>> Hanna:

Hello, and welcome to SciLux, the podcast where we talk about scientific developments and technological changes in Luxembourg, as usual proudly powered by Research Luxembourg. In today's show, we have two guests that I'm very, very happy to introduce. First of all, we have Dr. Carol Martinez, who is a member of the Space Robotics research group SpaceR and a mechatronics engineer with a PhD and a Master of Science in robotics and automation. Her research focuses on making robots perceive the world. And then we have Professor Miguel Olivares-Mendez, who is the head of SpaceR, the head of the LunaLab and the Zero-G Lab at SnT, the University of Luxembourg, and the program director of the Interdisciplinary Space Master. And we can already say the Space Master is changing its name to Master in Space Technology and Business. Thank you very much for coming today.

>> Carol Martinez:

Thank you, Hanna, for the invitation.

>> Miguel Olivares-Mendez:

Hello, Hanna. Thank you very much for the invitation.

>> Hanna:

I think that in our 40 minutes we could probably discuss only one question: what are robots? But we also have to look at your research. Still, let's understand at least that first concept, because I was already going sci-fi, and I think a lot of listeners will go sci-fi and imagine Terminators and whatnot. So what are robots, actually? How do you define them? When you specialize in robotics, what is a robot for you? Carol?

>> Carol Martinez:

So a robot is a machine that is able to perceive, understand, and act. This is different from an automatic machine. For example, the washing machine cannot change its program based on the weather conditions. The robot has a series of sensors that allow it to react to what is happening in the environment around it. This is easier said than done. Think about how we see: we can measure distances, we can know where we are, we can identify that something is risky. But this is difficult for a robot to do. To make a robot perceive, we need to include sensors on board the robot, and then, apart from the hardware itself, we need the whole software, which is what makes sense of the information that the sensors are providing.

>> Miguel Olivares-Mendez:

The usual understanding of a robot is that it looks like a human, but a robot is something more. We can start with what robotics is. Robotics is the science that includes research in mechatronics engineering, software engineering, mechanical design, and electronics engineering. So basically, a robot is a hardware system with software that allows the system to carry out actions by itself, somehow in an automatic way. That could be a Roomba cleaning your house, a robotic arm, or a camera in your house connected to some kind of system that puts the blinds down or up. So every system that has some intelligence can be considered a robot, not only human-like robots. Some intelligence.

>> Hanna:

What do you mean by that?

>> Miguel Olivares-Mendez:

Very good question. When we say intelligence, we mean, of course, a program that a human has written, which basically contains some rules, some decision making. If something happens, then there is some actuation. When I say actuation, I'm referring to motors. So there is some action based on some perception. That's the important point: the robot needs to understand the environment and then plan an action, moving its motors or its systems in order to react somehow. That, of course, has been programmed in advance. However, there is some new, advanced artificial intelligence that goes beyond this, where you train your system on many possible solutions and the system then decides which solution is most appropriate for a specific situation or scenario. That becomes really...

>> Hanna:

Challenging, I have to say. But also, from my perspective, what is challenging is making the robots perceive the world, and that's what you specialize in. Because we think, yeah, we see distance, right? I see you, that's easy. But it's not that easy to make robots understand what they are perceiving.

>> Carol Martinez:

Depending on what we want the robot to do, we have different types of perception algorithms. If we focus on computer vision, on understanding images, then we have algorithms that tell us where the objects are and how fast they are moving. But the robot itself also needs to know its own location with respect to some reference frame, and where the goal is, and then plan accordingly.
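
To make the idea a bit more concrete, here is a toy Python sketch of the kind of step Carol describes: find an object in a camera image and estimate its range with the pinhole-camera model. It is only an illustration; the image, focal length, and object size are all invented, and this is not SpaceR's actual code.

```python
import numpy as np

# Synthetic 120x160 grayscale image with one bright "rock" blob in it.
image = np.zeros((120, 160))
image[50:70, 90:120] = 255.0           # the mockup object, ~30 px wide

# 1) Detect: threshold the image and find the pixels belonging to the object.
mask = image > 128
ys, xs = np.nonzero(mask)
pixel_width = xs.max() - xs.min() + 1  # apparent width in pixels
centroid = (xs.mean(), ys.mean())      # where the object sits in the image

# 2) Estimate range with the pinhole-camera model:
#    distance ~= focal_length_px * real_width / pixel_width
focal_length_px = 400.0   # hypothetical focal length, in pixels
real_width_m = 0.5        # assumed physical width of the object, in metres
distance_m = focal_length_px * real_width_m / pixel_width

print(f"centroid (px): {centroid}, estimated range: {distance_m:.2f} m")
```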

>> Hanna:

Okay. Looking at the world through the robot's eyes must be really, really tricky. I think it's the moment when we can ask the pub quiz question. Remember, listeners, we only answer at the end of the podcast, so you need to listen carefully. And I think it's a question you can't easily google and find somewhere on one of your websites. I hope so. Don't google it even if you can find it. Miguel, can you ask the question? And remember, we answer only at the end.

>> Miguel Olivares-Mendez:

The question of today's podcast is: how many robots do we have in the space robotics laboratories?

>> Hanna:

And that means the laboratories that are in Kirchberg.

>> Miguel Olivares-Mendez:

Yeah, the laboratories are in Kirchberg. We are leading two different labs. One is the LunaLab. This is a lunar analog facility, which means it's a lab where we try to simulate the visual appearance of the surface of the Moon. The second lab is called the Zero-G Lab. The Zero-G Lab is a lab to emulate orbital environments, again visually, and also the interaction of robots in orbit. And then, of course, we equip them with robots. In one case it's robots that have to be on the Moon doing specific tasks. In the case of the Zero-G Lab, we need to simulate the environment and simulate the movement of satellites in zero gravity. Of course, we don't have zero gravity in this room, but we emulate it in different ways, and in this case we have robots that help us to simulate this environment.

>> Hanna:

Before we move on to a detailed discussion of those labs, I just wanted to ask one more question that I think is important to set the scene. Why do we need robots in space? Why are they necessary, Carol? Can't we, you know, like in Star Trek, just beam ourselves onto some planet, do the work and come back?

>> Carol Martinez:

It's like here on Earth: it is usually said that we need robots to do the three D tasks, the dull, dirty, and dangerous tasks. Space is a very dangerous environment for humans, and astronaut hours are very expensive. So we cannot expect, for example, astronauts on the ISS to be doing extravehicular activities all the time. There are tasks that can be automated with robots, where robots can sometimes perform better than humans, especially in the decision-making process under stress conditions. So in those tasks, of course, robots can help a lot. Also for travelling long distances: it's difficult to go from Earth to Mars with humans only. This is something we are planning to do, but we have already been doing it with robots, of course, to go there and explore first. Once we know that the conditions are suitable for humans to visit, then we can go. So it's like they go ahead and check how things are going, and then we can proceed with humans visiting or doing those tasks. But especially for repetitive tasks and tasks that are dangerous for humans.

>> Hanna:

But then it also means that we really need to program them well, don't we?

>> Carol Martinez:

Yes, we need to program them well. And we also need to be careful, especially when we talk about autonomy. It's always important to be supervising them, to have supervisory control in place, so that humans can make sure that the safety conditions are preserved.

>> Hanna:

That's interesting what you're saying, because I thought the drive is towards having fully autonomous robots exploring space.

>> Carol Martinez:

Yes, that's the ideal, the final goal. But it's always good to have the human in the loop. Of course, the human is not fully controlling the robot, not sending every single command, but at least supervising the different tasks at a high level. The final goal will, of course, be full autonomy, but we need to go step by step. Once we know and are sure that the algorithms are performing properly, then we can move up through the autonomy levels that we have.

>> Hanna:

And one way of making sure the algorithms work well is actually to have the labs that you mentioned, right? A place just around the corner in Kirchberg to check whether your applications could actually work, because, well, you said it's an analog, right? It's an emulation of the environment.

>> Miguel Olivares-Mendez:

Exactly. So in the past, people were doing tests in simulated environments, but of course you can simulate some parts of the environment, not all of them. The point now is going to analog facilities that take you one step closer to what's going to happen over there and how to improve the system. Until now, the big risk of the missions and also the huge investments basically reduced the autonomy of the robots, because people wanted to be sure that all the money invested was not lost through a wrong decision by an autonomous system. However, nowadays we are entering a new era of space activities, with a lot of industry going in this direction. Of course, this is because they are identifying business opportunities out there, and people hearing this podcast will probably hear more about this in the space-resources-related episodes. But basically, there is business out there. Before, space missions were for science, for knowing what's out there. Now, with space resources, we have identified that there is business, and in this case we need to go beyond. The demand to do it faster, to get the business developed faster, makes it necessary to do proper tests before sending anything and also to increase the autonomy level. So having this type of lab is a cornerstone for these next steps.

>> Hanna:

I was always wondering: when you have these labs, obviously you can't recreate the whole environment, you just need to choose something. So what are the things you can recreate, and what are the things where you have to let go and say, well, sorry guys, not this time?

>> Miguel Olivares-Mendez:

Yeah, exactly. That also depends on what the research group wants to focus on. We as a research group cannot focus on all the activities that you need to solve in order to have robots moving nicely, not damaging themselves, and doing the job properly. There are some aspects, for example how the wheels interact with the environment, which is called terramechanics, that we don't focus on, because we don't have regolith simulant. Regolith is the dust of the Moon. We have basalt, which has very similar visual properties. So what we focus on is emulating the visual properties. We don't emulate the low gravity of the Moon, which could be done with cranes holding the robot up and down, or the interaction of the system with the dust or with the soil. The important point in our case, also because I, as the founder of the group, come from autonomous navigation, is how to understand the environment, how to plan movement, and then how to develop the control approach to follow this movement. And this is what we try to recreate: the visual appearance. We simulate different kinds of scenarios, with more rocks, with fewer rocks, craters, hills, or flatter landscapes that could be present on the Moon.

>> Hanna:

And those are the photos that maybe some of you have seen. One additional thing: I think if you had the real simulant, you would not let anyone in, right? Because it's pretty expensive, isn't it?

>> Miguel Olivares-Mendez:

It's not only because it's expensive, although we have 20 tons of basalt and having 20 tons of regolith simulant would be costly. It's also because it's very dangerous for humans to breathe. The regolith simulant particles are really, really small and they are crushed, so they are not rounded, and these very small particles can get into your lungs when you breathe them in, and that can do some damage. So the point is that to use regolith simulant we would really need a protected environment, and the LunaLab is in a basement, so that complicates things a lot. But it's also because we wanted to focus more on the visual appearance and on the health of the researchers. So we took the decision to use basalt, with a coarser granularity as well.

>> Hanna:

So we've talked about the visual side of the LunaLab. So it comes down to perception, and you already mentioned the perception algorithms, as you called them, a little bit. So I want to know what they are and how you actually tell the robot: okay, see and react, go and see.

>> Carol Martinez:

We usually talk about computer vision. This is one of the possible perception approaches, the one that makes robots see.

>> Hanna:

And it means a lot of cameras.

>> Carol Martinez:

One camera, or you can use as many cameras as you want, but it comes at a cost, because then you need to be able to process all that information to extract meaningful information. This is one of the constraints of perception algorithms, because you can say, okay, I will add as many sensors as I can, as many cameras as I can, but then you need to process them. And for perception in robotics, you need to process in real time, because the robot needs that information to act and to react. So that's one of the challenges here: you need to be able to provide trustworthy information, but in real time, so that the robot can make decisions about what to do.
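
A rough back-of-the-envelope calculation shows why adding cameras gets expensive so quickly. The resolution and frame rate below are just example values, not the numbers used in the labs:

```python
# Raw data rate of one uncompressed RGB camera stream (example values only).
width, height = 1920, 1080        # pixels
bytes_per_pixel = 3               # 8-bit R, G and B
frames_per_second = 30

bytes_per_second = width * height * bytes_per_pixel * frames_per_second
print(f"{bytes_per_second / 1e6:.0f} MB/s per camera")          # ~187 MB/s

# Four such cameras already produce ~0.75 GB of pixels every second,
# all of which would have to be turned into decisions in real time on board.
print(f"{4 * bytes_per_second / 1e9:.2f} GB/s for four cameras")
```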

>> Hanna:

But then, if you're talking real time in space, there's the delay.

>> Carol Martinez:

Yeah, exactly. The delay matters if we are planning to send the information back to Earth, process it, and then send the commands back. Then it's not feasible, or it's feasible, but the level of processing is going to be very basic. If we are talking about processing many images, for example, then all the processing should happen on board. So we should have computers on board that are reading that information, extracting the relevant parts, and sending the commands directly to the actuators of the robot there in place. That's why, when I was talking before about the human supervising, the human is just checking whether the execution was successful or not. The information that we send back to Earth will be just the basics, already processed, not the full raw image to be processed on the ground. That's the ideal situation. However, the computers that robots usually have on board are not powerful enough to process everything we want, and then we need to be very selective. The area of computer vision for terrestrial robots, for example, is moving very fast. We are now able to do amazing things: detecting objects, following objects, and at the same time knowing what objects are there. This is a table, there are some trees here, the path is here. However, doing that for robots in space is going to be more challenging, especially in terms of processing time and the computational power required to do it.
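
To put that delay into numbers, here is a quick calculation of the signal travel time using approximate distances; the figures are round averages, not mission-specific values:

```python
# Back-of-the-envelope light-travel delays between Earth and two destinations.
SPEED_OF_LIGHT_KM_S = 299_792

distances_km = {
    "Moon": 384_400,               # average Earth-Moon distance
    "Mars (closest)": 54_600_000,
    "Mars (farthest)": 401_000_000,
}

for body, d in distances_km.items():
    one_way_s = d / SPEED_OF_LIGHT_KM_S
    print(f"{body:>16}: one-way {one_way_s:8.1f} s, round trip {2 * one_way_s:8.1f} s")

# Moon: ~1.3 s each way. Mars: roughly 3 to 22 minutes each way,
# before adding any time for processing on the ground.
```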

>> Hanna:

So you mentioned what we are already able to do. And I think computer vision is one of those fields that most people have heard of, but it's also one of those fields where, whenever I go to a conference and someone presents what has just been done, everyone is like, we're doing it already. So it's amazing, and I think the technology is just advancing like crazy. Although space is the next frontier, let's...

>> Carol Martinez:

Say. And also, applying that to robotics is still moving more slowly than the computer vision field itself.

>> Hanna:

Why is it slow?

>> Carol Martinez:

As I said, because in the end, in robotics, we need the interaction of other components. In computer vision, if you are, for example, only detecting things in images, you have a camera connected to a computer and you want to recognize people. You can put high computational power on that computer, and it's able to do all this. But then, when transferring that model to the computer that is on board the robot, maybe you are not going to have as many GPUs on board as that computer has. And then I need not just to recognize the person but, for example, if I want to approach Hanna, I have a robot that can recognize Hanna, but based on that I want to send commands to the motors so that it approaches her. So that interaction between what it is perceiving, plus the actuators, plus the planning, is what makes things harder, because it has to happen very fast.

>> Hanna:

I was laughing the other day. My son was turning on his tablet, and he was like, it's not turning on. And it was just one minute, right? And I remember, I'm sure you remember, those times when you would turn on the computer, go make yourself some coffee, and then come back, and maybe there was a chance that Windows had loaded or not. So things have changed a lot. But, Miguel, you mentioned that with AI, as we say, AI, or maybe better machine learning or deep learning or whatever, it's going faster, right? So also in this case, as Carol mentioned, the loop gets faster thanks to the models we can create.

>> Miguel Olivares-Mendez:

Yeah, that's a very important point, what Carol mentioned, that things need to happen fast. Computer vision itself can use data that has already been recorded, and then you run it on supercomputers, or even the computers that you have in your office could do it, but not the ones that are on board the robots. Those need to process and react within the time we expect this reaction to take. We cannot expect that the robot comes to us, detects us, and takes three minutes to recognize us; by then we will have left. So, of course, machine learning is improving these timings. As we said, you train in advance. That takes more time, and you can do it offline on supercomputers, even much more powerful than the ones we can have on our desks. And then, once the system has learned, you have this network that holds the knowledge and can process images much faster. So we have these two phases: the training in advance, which can take more time and requires more computational power, and then, once it's trained, that is basically what runs on the robot itself. And of course, things are developing quite fast and the time is coming down to what humans nowadays are very used to, as you mentioned, your kid waiting for the tablet to turn on in less than one minute.
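
A minimal sketch of those two phases, with entirely made-up data, might look like this: an expensive offline fit on a powerful ground machine, then a cheap per-frame evaluation of the learned weights on board. Real systems would use deep networks and far more data; only the split between the phases is the point here.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Phase 1: offline training, on a powerful ground computer ---------------
# Fake dataset: 1000 feature vectors (e.g. image descriptors) with 0/1 labels.
X = rng.normal(size=(1000, 16))
true_w = rng.normal(size=16)
y = (X @ true_w > 0).astype(float)

# Fit a simple linear model by least squares; this is the slow, heavy step.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
np.save("model_weights.npy", w)        # only the learned weights are shipped

# --- Phase 2: on board the robot ---------------------------------------------
# The robot loads the weights once and does one cheap dot product per frame.
w_onboard = np.load("model_weights.npy")
new_frame_features = rng.normal(size=16)
is_target = (new_frame_features @ w_onboard) > 0.5
print("target detected on board:", bool(is_target))
```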

>> Hanna:

Yes, totally. So, Carol, do you think that future progress is more about miniaturization, so the ability to actually put almost a supercomputer on the robot, or is it rather about being able to process more data offline, as Miguel said, and then connect to the robot so it can work faster?

>> Carol Martinez:

I think the future is in miniaturization. We need to have powerful computers that are smaller, that are lighter, and that can be carried on board. And we have already seen that progress. I remember when I started working on computer vision for aerial vehicles and the things we could do: we had to work with very small images because the computers could not process bigger ones. And now we are seeing drones that can really work with high-resolution images. So we have already seen that progress, and I think it will come sooner than we think, and we will have the capability of processing information from all these sensors at the same time and providing real-time data. Because it's not only visual data. When we talk about perception, we also talk about other sensors. For example, if we talk about robotic arms, we would like the robots to also have this sense of touch that we have. We humans don't rely only on images: we can close our eyes and interact with the world without seeing. So we need to be able to incorporate this information into the perception algorithms as well, so that we have a better understanding of the world and we also know how to handle its uncertainties. With visual data alone, you don't get all the information. For example, you don't know how heavy an object is. Maybe if you have already interacted with it, already carried it in your hands, then you have this feeling and it's in your memory. But if it's the first time, you don't know it. Then you go, you approach, you touch it, you interact with the object. So being able to provide robots with those additional sensing modalities is what we aim at when working on perception for robotics, not only visual information.

>> Hanna:

Okay, I got it. Thank you very much. That makes absolute sense. And it's true, all the human senses are so difficult to actually program into robots. And we have this tendency: when you see a robot running or doing some tricks, as I call them, or whatever, everybody gets excited. But I think people should also be excited about the abilities that robotic arms provide, and whatever else. And I remember when I was visiting SnT, I visited the LunaLab, and that's the excitement, because it looks cool, right? For everyone, I think. And then you go to the Zero-G Lab and you're a bit like, yeah, I don't have the knowledge to be amazed. So, Miguel, can you tell us a little bit about why it is so great? And what are you actually emulating there? Because it's zero-g, so everybody could think, oh, so now I'm going to fly. No, that's not it. So what is it, actually?

>> Miguel Olivares-Mendez:

Yeah, I completely understand your question. So we have to explain the Zero-G Lab in more detail than the LunaLab, of course, because the LunaLab speaks for itself. But yes, the Zero-G Lab. The point is that we wanted to emulate orbital environments in different ways, because we cannot have a completely zero-g situation in there. So to emulate an orbital environment, we have different ways to do it. One is having two robotic arms, one hanging from the ceiling and another on the wall, both mounted on robotic rails. The idea is that if we want to emulate the interaction or the approach of two satellites, say one satellite approaching an asteroid or some kind of space debris, then we put the perception sensors at the end-effector of one of the robotic arms. That could be cameras, infrared cameras, also lidars or lasers. And on the other one, we put the mockup of the satellite, asteroid, or debris. The room is painted black, and we also have a sun simulator and a way to simulate the albedo, that is, the reflection of light off the Earth or any other celestial body, so we can reproduce the illumination. In that way, we can capture a lot of images and train our machine learning algorithms. At the same time, as Carol was mentioning, we need this information in order to move and to plan an action. We can get the distance from the other satellite, or from the mockup of the asteroid or the debris, and the chaser satellite can plan the trajectory to go there and do the docking or rendezvous, or interact with other tools, like another robotic arm that emulates the robotic arm that could be on the satellite. Okay, this is good for emulating the environment visually and for the control. But then someone could ask: okay, but the rendezvous, the docking, is not going to be completely realistic, because the satellites and the mockups are hanging on robotic arms that have their own planned movement. So how do you emulate the reaction of bodies in an orbital environment where there is zero gravity? This we emulate specifically, because we have a flat floor made of epoxy, and we developed floating platforms. The floating platforms, as their name says, float. How do we make them float? We have a compressed-air system that blows air against the floor and generates a near-frictionless interface between the floor and the floating platform. We have two of them, and when they touch each other, a force is generated, and since this is a frictionless floor, they react similarly to how they would in an orbital environment. To help people understand more clearly how the action and reaction of these robotic systems work: it's similar to an air hockey table, the game where the table blows air and the puck floats. It's the same thing, except that in this case we don't blow air from the floor, we blow air from the floating platform. In addition, we have eight thrusters, eight nozzles, blowing air in different directions, and that emulates the propulsion system of a satellite. So with two of them, we can plan the movement in the plane of the floor, considering also their rotation, and then approach. And when they touch, the reaction is similar to what it would be in an orbital environment with zero or low gravity.
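
A tiny simulation shows why a frictionless air-bearing platform behaves in an orbit-like way: a short thruster pulse changes its velocity, and with no friction it then keeps drifting at that velocity. All masses, forces, and durations below are invented for the illustration:

```python
import numpy as np

mass = 200.0                      # kg, hypothetical floating platform
dt = 0.01                         # s, integration step
pos = np.array([0.0, 0.0])        # position in the plane of the floor
vel = np.array([0.0, 0.0])

def thruster_force(t):
    """One nozzle firing along +x for the first two seconds, then nothing."""
    return np.array([5.0, 0.0]) if t < 2.0 else np.array([0.0, 0.0])

for step in range(int(10.0 / dt)):            # simulate ten seconds
    t = step * dt
    acc = thruster_force(t) / mass            # F = m a, with no friction term
    vel = vel + acc * dt
    pos = pos + vel * dt

print(f"velocity after the pulse: {vel} m/s")  # ~0.05 m/s, and it stays there
print(f"position after 10 s:      {pos} m")
```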

>> Hanna:

Now, next time I visit, I will be more excited, I promise, because it's really not easy, at least for someone who is not an expert in robotics, to emulate zero gravity, that's for sure. We're slowly running out of time, and I have so many questions left to ask. I don't know, we'll have to meet once again. But I still wanted to discuss debris, because when I had Kathryn Hadler from ESRIC on the podcast, we focused more on space resources, so there was no time for debris, and when I had your colleague Simeon from SnT, we talked satellites, yes, but not that much about debris. And I know, Carol, you also look at that. So just tell me, what is your research about when it comes to debris, and what can we do about it? Should we actually remove it, or should it just stay there, maybe?

>> Carol Martinez:

Well, we should remove it. There are two main reasons. One is, of course, to prevent the risk of collision. This is a serious concern that is getting bigger, especially because nowadays, every single day, we have more satellites in orbit, so there is going to be a point where collisions increase. We need to protect the satellites that are active out there, and we also need to avoid collisions between the non-functioning satellites, because they are still there. And also for sustainability reasons: there is going to be a point where maybe we won't have space in space, you see? So we need to aim at doing something in the area of space debris removal. There are different initiatives, some that aim at removing the debris that is already there, but of course the new satellites being launched should also plan what to do at the end of their lifetime. What we are doing at SnT, in the research group, is two projects. One is focused on developing a capturing mechanism for active space debris removal. We are developing a concept that includes the use of gecko-inspired materials to attach to the debris. We also incorporate something called active and passive compliance, because we need to be careful when getting into contact with the debris not to produce more debris. And in addition to this, we are developing the capabilities of the Zero-G Lab, the lab you were talking about before, because we need to test this mechanism in the lab to make sure the system works, that we can actually activate the different components, and that it has the capability to attach to different types of debris. In order to do this, we need to develop lab capabilities that allow us to do what is called software-in-the-loop and hardware-in-the-loop simulation, where we connect a simulation environment, in which we actually have the orbital environment, to the real robots that we have in the Zero-G Lab, and they emulate the motion that the simulator is commanding. And when the bodies approach and get in touch, the forces produced by the contact are transmitted back into the simulator, the simulator processes the data and sends it back to the robots. So what we do in the lab is that connection between the hardware, with the mockups that Miguel mentioned before, and the software, which is where we actually have the orbital conditions, the orbital environment.
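
Conceptually, the hardware-in-the-loop cycle Carol describes alternates between a software step and a hardware step. The sketch below only illustrates that cycle: in the real facility the commanded pose drives the robotic arms and the contact force comes from a force/torque sensor, whereas here both are replaced by simple functions with invented numbers.

```python
def simulator_step(pos, vel, external_force, mass=100.0, dt=0.01):
    """Software side: propagate the chaser's free-floating dynamics."""
    acc = external_force / mass
    return pos + vel * dt, vel + acc * dt

def measured_contact_force(pos, target_pos=1.0, stiffness=500.0):
    """Hardware side (stub): a spring-like force once the mockups touch."""
    overlap = pos - target_pos
    return -stiffness * overlap if overlap > 0 else 0.0

pos, vel, force = 0.0, 0.2, 0.0                  # chaser drifting toward the target
for _ in range(1000):                             # ten seconds of loop time
    pos, vel = simulator_step(pos, vel, force)    # simulator commands the motion
    force = measured_contact_force(pos)           # robots feed the contact force back

print(f"final position {pos:.3f} m, final velocity {vel:.3f} m/s")
```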

>> Hanna:

And that's the first research project. But you said they're a combination.

>> Carol Martinez:

Yes, they are connected, because in one we are focusing on designing the concept, designing the mechanism, and prototyping it, and the other one is about how to test it. So they are connected.

>> Hanna:

Very interesting. And fingers crossed for having some Luxembourg-based solution for that, because, well, it's a race as well. There are quite a few different companies working on it, and with ESA looking more and more at sustainability right now, I think it's important that we are also there and doing something. So I think what remains is probably to answer the pub quiz question, and later on I will also ask Miguel to tell us a little bit about the conference that is coming up pretty soon. So, first of all, the pub quiz question. Miguel, can you remind us what the question was, and then together we will try to answer it, I think.

>> Miguel Olivares-Mendez:

So the question was: how many robots do we have in the Space Robotics research group? This, of course, is an answer that is valid now but will eventually change in a few months or years, because we keep growing. Let me tell you. We have ten Leo robots, small robots that people from the Master are also using in the space robotics classes. We have one bigger lunar rover, called Robotnik. We have one robotic arm that is sometimes connected to the Robotnik rover, the big rover. So that makes twelve. We have two robotic arms in the Zero-G Lab, 13. We have two floating platforms, 15. And then, for other classes that Carol is teaching at the Space Master on manipulation in space, we have four small robotic arms.

>> Hanna:

Remember that there are outreach activities from time to time: right now there is an amazing exhibition at the Museum of Natural History, and you will probably be able to take part in some activities where your group is also present. And whenever there's a science festival or anything like that, if you see a rover somewhere, it's probably some of the people from your group. And of course, why not go for the Master in Space Technology and Business? I mean, that sounds so cool and so interesting. And one last thing, for the people who are maybe a little bit more in the field: there is a conference being organized for the first time ever this year, I think, and it's called iSpaRo.

>> Miguel Olivares-Mendez:

Yeah, Hanna, that's right. I'm the founder and general chair of this new international conference. We have the support of many, many people around the world, from NASA, and also people from Australia, Europe, and Asia. This is the new International Conference on Space Robotics. It's going to be hosted in Luxembourg from the 24th to the 27th of June this year, and we are looking forward to gathering not only researchers but also industry and people from different organizations at the Neimënster Abbey this summer. So we're looking forward to hearing what people want to share there. We still have the call for papers open until the 10th of February.

>> Hanna:

Yes. So if you're listening to the podcast, you still have a little bit of time. Not that much, so it's a rush, because it's the 10th of February. And I know that Miguel was very nice, I mean, not you personally, but the organizers, let's put it this way, to move the deadline. So please, this is the last chance: do write your paper and sign up for the conference if you can.

>> Miguel Olivares-Mendez:

Well, we can also share some fresh news that is still not officially announced: we are going to have a call for posters, for the latest breaking research or activities, so that we also open this up to people who don't yet have the big results of their experiments but do have first, very promising results. That will open after the paper deadline.

>> Hanna:

Okay, so I got a scoop as well, yay. Anyway, thank you so much. And yes, I hope to discuss this again in a couple of years' time, once the robots number in the hundreds, and we can talk about where SnT is, and probably somewhere else, because you actually have space problems in the sense of just not enough space for the researchers, right? So if you have more robots, then probably there's going to be...

>> Miguel Olivares-Mendez:

We still have more humans than robots. That's always good.

>> Hanna:

That's exactly what Carol said, right? The human in the loop, that's always important. Thank you very much, Carol and Miguel, for coming today and telling us all about your activities.

>> Carol Martinez:

Thank you, Hanna, for the invitation.

>> Miguel Olivares-Mendez:

Thank you, Hanna. It was a pleasure.

>> Hanna:

And this is it for today. Don't forget to subscribe and follow us, and to listen to all the previous episodes. There's a lot going on here in Luxembourg in the space industry, so you can check out some previous episodes. There was also one about computer vision, with Djamila Aouada from SnT, so listen to that as well, because we didn't have time to discuss computer vision in more detail today. And of course, write to us on all the social media, suggest guests, contact me, and thank you very much for listening. This was SciLux, and my name is Hanna Siemaszko.
