Publications
2025
- NeurIPS: Optimism via Intrinsic Rewards: Scalable and Principled Exploration for Model-Based Reinforcement Learning. In 7th Robot Learning Workshop: Towards Robots with Human-Level Abilities, 2025.
- ICLR: ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning. In The Thirteenth International Conference on Learning Representations, 2025.
2024
- NeurIPS: When to Sense and Control? A Time-Adaptive Approach for Continuous-Time RL. Advances in Neural Information Processing Systems, 2024.
- NeurIPS: NeoRL: Efficient Exploration for Nonepisodic RL. Advances in Neural Information Processing Systems, 2024.
- NeurIPS: Transductive Active Learning: Theory and Applications. Advances in Neural Information Processing Systems, 2024.
- IROS: Bridging the Sim-to-Real Gap with Bayesian Inference. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024.
- ICLR Workshop
2023
- NeurIPS: Efficient Exploration in Continuous-Time Model-Based Reinforcement Learning. Advances in Neural Information Processing Systems, 2023.
- NeurIPS: Optimistic Active Exploration of Dynamical Systems. Advances in Neural Information Processing Systems, 2023.
2021
- NeurIPS: Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models. Advances in Neural Information Processing Systems, 2021.
- L4DC: Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory. In Learning for Dynamics and Control, 2021.