Topic Keywords

[ $\ell_1$ norm ] [ $f-$divergence ] [ 3D Convolution ] [ 3D deep learning ] [ 3D generation ] [ 3d point cloud ] [ 3D Reconstruction ] [ 3D scene understanding ] [ 3D shape representations ] [ 3D shapes learning ] [ 3D vision ] [ 3D Vision ] [ abstract reasoning ] [ abstract rules ] [ Acceleration ] [ accuracy ] [ acoustic condition modeling ] [ Action localization ] [ action recognition ] [ activation maximization ] [ activation strategy. ] [ Active learning ] [ Active Learning ] [ AdaBoost ] [ adaptive heavy-ball methods ] [ Adaptive Learning ] [ adaptive methods ] [ adaptive optimization ] [ ADMM ] [ Adversarial Accuracy ] [ Adversarial Attack ] [ Adversarial Attacks ] [ adversarial attacks/defenses ] [ Adversarial computer programs ] [ Adversarial Defense ] [ Adversarial Example Detection ] [ Adversarial Examples ] [ Adversarial Learning ] [ Adversarial Machine Learning ] [ adversarial patch ] [ Adversarial robustness ] [ Adversarial Robustness ] [ Adversarial training ] [ Adversarial Training ] [ Adversarial Transferability ] [ aesthetic assessment ] [ affine parameters ] [ age estimation ] [ Aggregation Methods ] [ AI for earth science ] [ ALFRED ] [ Algorithm ] [ algorithmic fairness ] [ Algorithmic fairness ] [ Algorithms ] [ alignment ] [ alignment of semantic and visual space ] [ amortized inference ] [ Analogies ] [ annotation artifacts ] [ anomaly-detection ] [ Anomaly detection with deep neural networks ] [ anonymous walk ] [ appearance transfer ] [ approximate constrained optimization ] [ approximation ] [ Approximation ] [ Architectures ] [ argoverse ] [ Artificial Integlligence ] [ ASR ] [ assistive technology ] [ associative memory ] [ Associative Memory ] [ asynchronous parallel algorithm ] [ Atari ] [ Attention ] [ Attention Mechanism ] [ Attention Modules ] [ attractors ] [ attributed walks ] [ Auction Theory ] [ audio understanding ] [ Audio-Visual ] [ audio visual learning ] [ audio-visual representation ] [ audio-visual representation learning ] [ Audio-visual sound separation ] [ audiovisual synthesis ] [ augmented deep reinforcement learning ] [ autodiff ] [ Autoencoders ] [ automated data augmentation ] [ automated machine learning ] [ automatic differentiation ] [ AutoML ] [ autonomous learning ] [ autoregressive language model ] [ Autoregressive Models ] [ AutoRL ] [ auxiliary information ] [ auxiliary latent variable ] [ Auxiliary Learning ] [ auxiliary task ] [ Average-case Analysis ] [ aversarial examples ] [ avoid knowledge leaking ] [ backdoor attack ] [ Backdoor Attacks ] [ Backdoor Defense ] [ Backgrounds ] [ backprop ] [ back translation ] [ backward error analysis ] [ bagging ] [ batchnorm ] [ Batch Normalization ] [ batch reinforcement learning ] [ Batch Reinforcement Learning ] [ batch selection ] [ Bayesian ] [ Bayesian classification ] [ Bayesian inference ] [ Bayesian Inference ] [ Bayesian networks ] [ Bayesian Neural Networks ] [ behavior cloning ] [ belief-propagation ] [ Benchmark ] [ benchmarks ] [ benign overfitting ] [ bert ] [ BERT ] [ beta-VAE ] [ better generalization ] [ biased sampling ] [ biases ] [ Bias in Language Models ] [ bidirectional ] [ bilevel optimization ] [ Bilinear games ] [ Binary Embeddings ] [ Binary Neural Networks ] [ binaural audio ] [ binaural speech ] [ biologically plausible ] [ Biometrics ] [ bisimulation ] [ Bisimulation ] [ bisimulation metrics ] [ bit-flip ] [ bit-level sparsity ] [ blind denoising ] [ blind spots ] [ block mdp ] [ boosting ] [ bottleneck ] [ bptt ] [ branch and bound ] [ Brownian motion ] [ 
Budget-Aware Pruning ] [ Budget constraints ] [ Byzantine resilience ] [ Byzantine SGD ] [ CAD modeling ] [ calibration ] [ Calibration ] [ calibration measure ] [ cancer research ] [ Capsule Networks ] [ Catastrophic forgetting ] [ Catastrophic Forgetting ] [ Causal Inference ] [ Causality ] [ Causal network ] [ certificate ] [ certified defense ] [ Certified Robustness ] [ challenge sets ] [ change of measure ] [ change point detection ] [ channel suppressing ] [ Channel Tensorization ] [ Channel-Wise Approximated Activation ] [ Chaos ] [ chebyshev polynomial ] [ checkpointing ] [ Checkpointing ] [ chemistry ] [ CIFAR ] [ Classification ] [ class imbalance ] [ clean-label ] [ Clustering ] [ Clusters ] [ CNN ] [ CNNs ] [ Code Compilation ] [ Code Representations ] [ Code Structure ] [ code summarization ] [ Code Summarization ] [ Cognitively-inspired Learning ] [ cold posteriors ] [ collaborative learning ] [ Combinatorial optimization ] [ common object counting ] [ commonsense question answering ] [ Commonsense Reasoning ] [ Communication Compression ] [ co-modulation ] [ complete verifiers ] [ complex query answering ] [ Composition ] [ compositional generalization ] [ compositional learning ] [ compositional task ] [ Compressed videos ] [ Compressing Deep Networks ] [ Compression ] [ computation ] [ computational biology ] [ Computational Biology ] [ computational complexity ] [ Computational imaging ] [ Computational neuroscience ] [ Computational resources ] [ computer graphics ] [ Computer Vision ] [ concentration ] [ Concentration of Measure ] [ Concept-based Explanation ] [ concept drift ] [ Concept Learning ] [ conditional expectation ] [ Conditional GANs ] [ Conditional Generation ] [ Conditional generative adversarial networks ] [ conditional layer normalization ] [ Conditional Neural Processes ] [ Conditional Risk Minimization ] [ Conditional Sampling ] [ conditional text generation ] [ Conferrability ] [ confidentiality ] [ conformal inference ] [ conformal prediction ] [ conjugacy ] [ conservation law ] [ consistency ] [ consistency training ] [ Consistency Training ] [ constellation models ] [ constrained beam search ] [ Constrained optimization ] [ constrained RL ] [ constraints ] [ constraint satisfaction ] [ contact tracing ] [ Contextual Bandits ] [ Contextual embedding space ] [ Continual learning ] [ Continual Learning ] [ continuation method ] [ continuous and scalar conditions ] [ continuous case ] [ Continuous Control ] [ continuous convolution ] [ continuous games ] [ continuous normalizing flow ] [ continuous time ] [ Continuous-time System ] [ continuous treatment effect ] [ contrastive divergence ] [ Contrastive learning ] [ Contrastive Learning ] [ Contrastive Methods ] [ contrastive representation learning ] [ control barrier function ] [ controlled generation ] [ Controlled NLG ] [ Convergence ] [ Convergence Analysis ] [ convex duality ] [ Convex optimization ] [ ConvNets ] [ convolutional kernel methods ] [ Convolutional Layer ] [ convolutional models ] [ Convolutional Networks ] [ copositive programming ] [ corruptions ] [ COST ] [ Counterfactual inference ] [ counterfactuals ] [ Counterfactuals ] [ covariant neural networks ] [ covid-19 ] [ COVID-19 ] [ Cross-domain ] [ cross-domain few-shot learning ] [ cross-domain video generation ] [ cross-episode attention ] [ cross-fitting ] [ cross-lingual pretraining ] [ Cryptographic inference ] [ cultural transmission ] [ Curriculum Learning ] [ curse of memory ] [ curvature estimates ] [ custom voice ] [ 
cycle-consistency regularization ] [ cycle-consistency regularizer ] [ DAG ] [ DARTS stability ] [ Data augmentation ] [ Data Augmentation ] [ data cleansing ] [ Data-driven modeling ] [ data-efficient learning ] [ data-efficient RL ] [ Data Flow ] [ data labeling ] [ data parallelism ] [ Data Poisoning ] [ Data Protection ] [ Dataset ] [ dataset bias ] [ dataset compression ] [ dataset condensation ] [ dataset corruption ] [ dataset distillation ] [ dataset summarization ] [ data structures ] [ debiased training ] [ debugging ] [ Decentralized Optimization ] [ decision boundary geometry ] [ decision trees ] [ declarative knowledge ] [ deep-anomaly-detection ] [ Deep Architectures ] [ Deep denoising priors ] [ deep embedding ] [ Deep Ensembles ] [ deep equilibrium models ] [ Deep Equilibrium Models ] [ Deepfake ] [ deep FBSDEs ] [ Deep Gaussian Processes ] [ Deep generative model ] [ Deep generative modeling ] [ Deep generative models ] [ deeplearning ] [ Deep learning ] [ Deep Learning ] [ deep learning dynamics ] [ Deep Learning Theory ] [ deep network training ] [ deep neural network ] [ deep neural networks. ] [ Deep Neural Networks ] [ deep one-class classification ] [ deep Q-learning ] [ Deep reinforcement learning ] [ Deep Reinforcement Learning ] [ deep ReLU networks ] [ Deep residual neural networks ] [ deep RL ] [ deep sequence model ] [ deepset ] [ Deep Sets ] [ Deformation Modeling ] [ delay ] [ Delay differential equations ] [ denoising score matching ] [ Dense Retrieval ] [ Density estimation ] [ Density Estimation ] [ Density ratio estimation ] [ dependency based method ] [ deployment-efficiency ] [ depression ] [ depth separation ] [ descent ] [ description length ] [ determinantal point processes ] [ Device Placement ] [ dialogue state tracking ] [ differentiable optimization ] [ Differentiable physics ] [ Differentiable Physics ] [ Differentiable program generator ] [ differentiable programming ] [ Differentiable rendering ] [ Differentiable simulation ] [ differential dynamica programming ] [ differential equations ] [ Differential Geometry ] [ differentially private deep learning ] [ Differential Privacy ] [ diffusion probabilistic models ] [ diffusion process ] [ dimension ] [ Directed Acyclic Graphs ] [ Dirichlet form ] [ Discrete Optimization ] [ discretization error ] [ disentangled representation learning ] [ Disentangled representation learning ] [ Disentanglement ] [ distance ] [ Distillation ] [ distinct elements ] [ Distributed ] [ distributed deep learning ] [ distributed inference ] [ Distributed learning ] [ distributed machine learning ] [ Distributed ML ] [ Distributed Optimization ] [ distributional robust optimization ] [ distribution estimation ] [ distribution shift ] [ diverse strategies ] [ diverse video generation ] [ Diversity denoising ] [ Diversity Regularization ] [ DNN ] [ DNN compression ] [ document analysis ] [ document classification ] [ document retrieval ] [ domain adaptation theory ] [ Domain Adaption ] [ Domain Generalization ] [ domain randomization ] [ Domain Translation ] [ double descent ] [ Double Descent ] [ doubly robustness ] [ Doubly-weighted Laplace operator ] [ Dropout ] [ drug discovery ] [ Drug discovery ] [ dst ] [ Dual-mode ASR ] [ Dueling structure ] [ Dynamical Systems ] [ dynamic computation graphs ] [ dynamics ] [ dynamics prediction ] [ dynamic systems ] [ Early classification ] [ Early pruning ] [ early stopping ] [ EBM ] [ Edit ] [ EEG ] [ effective learning rate ] [ Efficiency ] [ Efficient Attention Mechanism ] [ 
efficient deep learning ] [ Efficient Deep Learning ] [ Efficient Deep Learning Inference ] [ Efficient ensembles ] [ efficient inference ] [ efficient inference methods ] [ Efficient Inference Methods ] [ EfficientNets ] [ efficient network ] [ Efficient Networks ] [ Efficient training ] [ Efficient Training ] [ efficient training and inference. ] [ egocentric ] [ eigendecomposition ] [ Eigenspectrum ] [ ELBO ] [ electroencephalography ] [ EM ] [ Embedding Models ] [ Embedding Size ] [ Embodied Agents ] [ embodied vision ] [ emergent behavior ] [ empirical analysis ] [ Empirical Game Theory ] [ empirical investigation ] [ Empirical Investigation ] [ empirical study ] [ empowerment ] [ Encoder layer fusion ] [ end-to-end entity linking ] [ End-to-End Object Detection ] [ Energy ] [ Energy-Based GANs ] [ energy based model ] [ energy-based model ] [ Energy-based model ] [ energy based models ] [ Energy-based Models ] [ Energy Based Models ] [ Energy-Based Models ] [ Energy Score ] [ ensemble ] [ Ensemble ] [ ensemble learning ] [ ensembles ] [ Ensembles ] [ entity disambiguation ] [ entity linking ] [ entity retrieval ] [ entropic algorithms ] [ Entropy Maximization ] [ Entropy Model ] [ entropy regularization ] [ epidemiology ] [ episode-level pretext task ] [ episodic training ] [ equilibrium ] [ equivariant ] [ equivariant neural network ] [ ERP ] [ Evaluation ] [ evaluation of interpretability ] [ Event localization ] [ evolution ] [ Evolutionary algorithm ] [ Evolutionary Algorithm ] [ Evolutionary Algorithms ] [ Excess risk ] [ experience replay buffer ] [ experimental evaluation ] [ Expert Models ] [ Explainability ] [ explainable ] [ Explainable AI ] [ Explainable Model ] [ explaining decision-making ] [ explanation method ] [ explanations ] [ Explanations ] [ Exploration ] [ Exponential Families ] [ exponential tilting ] [ exposition ] [ external memory ] [ Extrapolation ] [ extremal sector ] [ facial recognition ] [ factor analysis ] [ factored MDP ] [ Factored MDP ] [ fairness ] [ Fairness ] [ faithfulness ] [ fast DNN inference ] [ fast learning rate ] [ fast-mapping ] [ fast weights ] [ FAVOR ] [ Feature Attribution ] [ feature propagation ] [ features ] [ feature visualization ] [ Feature Visualization ] [ Federated learning ] [ Federated Learning ] [ Few Shot ] [ few-shot concept learning ] [ few-shot domain generalization ] [ Few-shot learning ] [ Few Shot Learning ] [ fine-tuning ] [ finetuning ] [ Fine-tuning ] [ Finetuning ] [ fine-tuning stability ] [ Fingerprinting ] [ First-order Methods ] [ first-order optimization ] [ fisher ratio ] [ flat minima ] [ Flexibility ] [ flow graphs ] [ Fluid Dynamics ] [ Follow-the-Regularized-Leader ] [ Formal Verification ] [ forward mode ] [ Fourier Features ] [ Fourier transform ] [ framework ] [ Frobenius norm ] [ from-scratch ] [ frontend ] [ fruit fly ] [ fully-connected ] [ Fully-Connected Networks ] [ future frame generation ] [ future link prediction ] [ fuzzy tiling activation function ] [ Game Decomposition ] [ Game Theory ] [ GAN ] [ GAN compression ] [ GANs ] [ Garbled Circuits ] [ Gaussian Copula ] [ Gaussian Graphical Model ] [ Gaussian Isoperimetric Inequality ] [ Gaussian mixture model ] [ Gaussian process ] [ Gaussian Process ] [ Gaussian Processes ] [ gaussian process priors ] [ GBDT ] [ generalisation ] [ Generalization ] [ Generalization Bounds ] [ generalization error ] [ Generalization Measure ] [ Generalization of Reinforcement Learning ] [ generalized ] [ generalized Girsanov theorem ] [ Generalized PageRank ] [ 
Generalized zero-shot learning ] [ Generation ] [ Generative Adversarial Network ] [ Generative Adversarial Networks ] [ generative art ] [ Generative Flow ] [ Generative Model ] [ Generative modeling ] [ Generative Modeling ] [ generative modelling ] [ Generative Modelling ] [ Generative models ] [ Generative Models ] [ genetic programming ] [ Geodesic-Aware FC Layer ] [ geometric ] [ Geometric Deep Learning ] [ G-invariance regularization ] [ global ] [ global optima ] [ Global Reference ] [ glue ] [ GNN ] [ GNNs ] [ goal-conditioned reinforcement learning ] [ goal-conditioned RL ] [ goal reaching ] [ gradient ] [ gradient alignment ] [ Gradient Alignment ] [ gradient boosted decision trees ] [ gradient boosting ] [ gradient decomposition ] [ Gradient Descent ] [ gradient descent-ascent ] [ gradient flow ] [ Gradient flow ] [ gradient flows ] [ gradient redundancy ] [ Gradient stability ] [ Grammatical error correction ] [ Granger causality ] [ Graph ] [ graph classification ] [ graph coarsening ] [ Graph Convolutional Network ] [ Graph Convolutional Neural Networks ] [ graph edit distance ] [ Graph Generation ] [ Graph Generative Model ] [ graph-level prediction ] [ graph networks ] [ Graph neural network ] [ Graph Neural Network ] [ Graph neural networks ] [ Graph Neural Networks ] [ Graph pooling ] [ graph representation learning ] [ Graph representation learning ] [ Graph Representation Learning ] [ graph shift operators ] [ graph-structured data ] [ graph structure learning ] [ Greedy Learning ] [ grid cells ] [ grounding ] [ group disparities ] [ group equivariance ] [ Group Equivariance ] [ Group Equivariant Convolution ] [ group equivariant self-attention ] [ group equivariant transformers ] [ group sparsity ] [ Group-supervised learning ] [ gumbel-softmax ] [ Hamiltonian systems ] [ hard-label attack ] [ hard negative mining ] [ hard negative sampling ] [ Hardware-Aware Neural Architecture Search ] [ Harmonic Analysis ] [ harmonic distortion analysis ] [ healthcare ] [ Healthcare ] [ heap allocation ] [ Hessian matrix ] [ Heterogeneity ] [ Heterogeneous ] [ heterogeneous data ] [ Heterogeneous data ] [ Heterophily ] [ heteroscedasticity ] [ heuristic search ] [ hidden-parameter mdp ] [ hierarchical contrastive learning ] [ Hierarchical Imitation Learning ] [ Hierarchical Multi-Agent Learning ] [ Hierarchical Networks ] [ Hierarchical Reinforcement Learning ] [ Hierarchy-Aware Classification ] [ high-dimensional asymptotics ] [ high-dimensional statistic ] [ high-resolution video generation ] [ hindsight relabeling ] [ histogram binning ] [ historical color image classification ] [ HMC ] [ homomorphic encryption ] [ Homophily ] [ Hopfield layer ] [ Hopfield networks ] [ Hopfield Networks ] [ human-AI collaboration ] [ human cognition ] [ human-computer interaction ] [ human preferences ] [ human psychophysics ] [ humans in the loop ] [ hybrid systems ] [ Hyperbolic ] [ hyperbolic deep learning ] [ Hyperbolic Geometry ] [ hypercomplex representation learning ] [ hypergradients ] [ Hypernetworks ] [ hyperparameter ] [ Hyperparameter Optimization ] [ Hyper-Parameter Optimization ] [ HYPERPARAMETER OPTIMIZATION ] [ Image Classification ] [ image completion ] [ Image compression ] [ Image Editing ] [ Image Generation ] [ Image manipulation ] [ Image Modeling ] [ ImageNet ] [ image reconstruction ] [ Image segmentation ] [ Image Synthesis ] [ image-to-action learning ] [ Image-to-Image Translation ] [ image translation ] [ image warping ] [ imbalanced learning ] [ Imitation Learning ] 
[ Impartial Learning ] [ implicit bias ] [ Implicit Bias ] [ Implicit Deep Learning ] [ implicit differentiation ] [ implicit functions ] [ implicit neural representations ] [ Implicit Neural Representations ] [ Implicit Representation ] [ Importance Weighting ] [ impossibility ] [ incoherence ] [ Incompatible Environments ] [ Incremental Tree Transformations ] [ independent component analysis ] [ indirection ] [ Individual mediation effects ] [ Inductive Bias ] [ inductive biases ] [ inductive representation learning ] [ infinitely wide neural network ] [ Infinite-Width Limit ] [ infinite-width networks ] [ influence functions ] [ Influence Functions ] [ Information bottleneck ] [ Information Bottleneck ] [ Information Geometry ] [ information-theoretical probing ] [ Information theory ] [ Information Theory ] [ Initialization ] [ input-adaptive multi-exit neural networks ] [ input convex neural networks ] [ input-convex neural networks ] [ InstaHide ] [ Instance adaptation ] [ instance-based label noise ] [ Instance learning ] [ Instance-wise Learning ] [ Instrumental Variable Regression ] [ integral probability metric ] [ intention ] [ interaction networks ] [ Interactions ] [ interactive fiction ] [ Internet of Things ] [ Interpolation Peak ] [ Interpretability ] [ interpretable latent representation ] [ Interpretable Machine Learning ] [ interpretable policy learning ] [ in-the-wild data ] [ Intrinsically Motivated Reinforcement Learning ] [ Intrinsic Motivation ] [ intrinsic motivations ] [ Intrinsic Reward ] [ Invariance and Equivariance ] [ invariance penalty ] [ invariances ] [ Invariant and equivariant deep networks ] [ Invariant Representations ] [ invariant risk minimization ] [ Invariant subspaces ] [ inverse graphics ] [ Inverse reinforcement learning ] [ Inverse Reinforcement Learning ] [ Inverted Index ] [ irl ] [ IRM ] [ irregularly spaced time series ] [ irregular-observed data modelling ] [ isometric ] [ Isotropy ] [ iterated learning ] [ iterative training ] [ JEM ] [ Johnson-Lindenstrauss Transforms ] [ kernel ] [ Kernel Learning ] [ kernel method ] [ kernel-ridge regression ] [ kernels ] [ keypoint localization ] [ Knowledge distillation ] [ Knowledge Distillation ] [ Knowledge factorization ] [ Knowledge Graph Reasoning ] [ knowledge uncertainty ] [ Kullback-Leibler divergence ] [ Kurdyka-Łojasiewicz geometry ] [ label noise robustness ] [ Label Representation ] [ Label shift ] [ label smoothing ] [ Langevin dynamics ] [ Langevin sampling ] [ Language Grounding ] [ Language Model ] [ Language modeling ] [ Language Modeling ] [ Language Modelling ] [ Language Model Pre-training ] [ language processing ] [ language-specific modeling ] [ Laplace kernel ] [ Large-scale ] [ Large-scale Deep Learning ] [ large scale learning ] [ Large-scale Machine Learning ] [ large-scale pre-trained language models ] [ large-scale training ] [ large vocabularies ] [ Last-iterate Convergence ] [ Latency-aware Neural Architecture Search ] [ Latent Simplex ] [ latent space of GANs ] [ Latent Variable Models ] [ lattices ] [ Layer order ] [ layerwise sparsity ] [ learnable ] [ learned algorithms ] [ Learned compression ] [ learned ISTA ] [ Learning ] [ learning action representations ] [ learning-based ] [ learning dynamics ] [ Learning Dynamics ] [ Learning in Games ] [ learning mechanisms ] [ Learning physical laws ] [ Learning Theory ] [ Learning to Hash ] [ learning to optimize ] [ Learning to Optimize ] [ learning to rank ] [ Learning to Rank ] [ learning to teach ] [ learning with 
noisy labels ] [ Learning with noisy labels ] [ library ] [ lifelong ] [ Lifelong learning ] [ Lifelong Learning ] [ lifted inference ] [ likelihood-based models ] [ likelihood-free inference ] [ limitations ] [ limited data ] [ linear bandits ] [ Linear Convergence ] [ linear estimator ] [ Linear Regression ] [ linear terms ] [ linformer ] [ Lipschitz constants ] [ Lipschitz constrained networks ] [ Local Explanations ] [ locality sensitive hashing ] [ Locally supervised training ] [ local Rademacher complexity ] [ log-concavity ] [ Logic ] [ Logic Rules ] [ logsignature ] [ Long-Tailed Recognition ] [ long-tail learning ] [ Long-term dependencies ] [ long-term prediction ] [ long-term stability ] [ loss correction ] [ Loss function search ] [ Loss Function Search ] [ lossless source compression ] [ Lottery Ticket ] [ Lottery Ticket Hypothesis ] [ lottery tickets ] [ low-dimensional structure ] [ lower bound ] [ lower bounds ] [ Low-latency ASR ] [ low precision training ] [ low rank ] [ low-rank approximation ] [ low-rank tensors ] [ L-smoothness ] [ LSTM ] [ Lyapunov Chaos ] [ Machine learning ] [ Machine Learning ] [ machine learning for code ] [ Machine Learning for Robotics ] [ Machine Learning (ML) for Programming Languages (PL)/Software Engineering (SE) ] [ machine learning systems ] [ Machine translation ] [ Machine Translation ] [ magnitude-based pruning ] [ Manifold clustering ] [ Manifolds ] [ Many-task ] [ mapping ] [ Markov chain Monte Carlo ] [ Markov Chain Monte Carlo ] [ Markov jump process ] [ Masked Reconstruction ] [ mathematical reasoning ] [ Matrix and Tensor Factorization ] [ matrix completion ] [ matrix decomposition ] [ Matrix Factorization ] [ max-margin ] [ MCMC ] [ MCMC sampling ] [ mean estimation ] [ mean-field dynamics ] [ mean separation ] [ Mechanism Design ] [ medical time series ] [ mel-filterbanks ] [ memorization ] [ Memorization ] [ Memory ] [ memory efficient ] [ memory efficient training ] [ Memory Mapping ] [ memory optimized training ] [ Memory-saving ] [ mesh ] [ Message Passing ] [ Message Passing GNNs ] [ meta-gradients ] [ Meta-learning ] [ Meta Learning ] [ Meta-Learning ] [ Metric Surrogate ] [ minimax optimal rate ] [ Minimax Optimization ] [ minimax risk ] [ Minmax ] [ min-max optimization ] [ mirror-prox ] [ Missing Data Inference ] [ Missing value imputation ] [ Missing Values ] [ misssing data ] [ mixed precision ] [ Mixed Precision ] [ Mixed-precision quantization ] [ mixture density nets ] [ mixture of experts ] [ mixup ] [ Mixup ] [ MixUp ] [ MLaaS ] [ MoCo ] [ Model Attribution ] [ model-based control ] [ model-based learning ] [ Model-based Reinforcement Learning ] [ Model-Based Reinforcement Learning ] [ model-based RL ] [ Model-based RL ] [ Model Biases ] [ Model compression ] [ model extraction ] [ model fairness ] [ Model Inversion ] [ model order reduction ] [ model ownership ] [ model predictive control ] [ model-predictive control ] [ Model Predictive Control ] [ Model privacy ] [ Models for code ] [ models of learning and generalization ] [ Model stealing ] [ Modern Hopfield Network ] [ modern Hopfield networks ] [ modified equation analysis ] [ modular architectures ] [ Modular network ] [ modular networks ] [ modular neural networks ] [ modular representations ] [ modulated convolution ] [ Molecular conformation generation ] [ molecular design ] [ Molecular Dynamics ] [ molecular graph generation ] [ Molecular Representation ] [ Molecule Design ] [ Momentum ] [ momentum methods ] [ momentum optimizer ] [ monotonicity ] [ 
Monte Carlo ] [ Monte-Carlo tree search ] [ Monte Carlo Tree Search ] [ morphology ] [ Morse theory ] [ mpc ] [ Multi-agent ] [ Multi-agent games ] [ Multiagent Learning ] [ multi-agent platform ] [ Multi-Agent Policy Gradients ] [ Multi-agent reinforcement learning ] [ Multi-agent Reinforcement Learning ] [ Multi-Agent Reinforcement Learning ] [ Multi-Agent Transfer Learning ] [ multiclass classification ] [ multi-dimensional discrete action spaces ] [ Multi-domain ] [ multi-domain disentanglement ] [ multi-head attention ] [ Multi-Hop ] [ multi-hop question answering ] [ Multi-hop Reasoning ] [ Multilingual Modeling ] [ multilingual representations ] [ multilingual transformer ] [ multilingual translation ] [ Multimodal ] [ Multi-Modal ] [ Multimodal Attention ] [ multi-modal learning ] [ Multimodal Learning ] [ Multi-Modal Learning ] [ Multimodal Spaces ] [ Multi-objective optimization ] [ multi-player ] [ Multiplicative Weights Update ] [ Multi-scale Representation ] [ multitask ] [ Multi-task ] [ Multi-task Learning ] [ Multi Task Learning ] [ Multi-Task Learning ] [ multi-task learning theory ] [ Multitask Reinforcement Learning ] [ Multi-view Learning ] [ Multi-View Learning ] [ Multi-view Representation Learning ] [ Mutual Information ] [ MuZero ] [ Named Entity Recognition ] [ NAS ] [ nash ] [ natural gradient descent ] [ Natural Language Processing ] [ natural scene statistics ] [ natural sparsity ] [ Negative Sampling ] [ negotiation ] [ nested optimization ] [ network architecture ] [ Network Architecture ] [ Network Inductive Bias ] [ network motif ] [ Network pruning ] [ Network Pruning ] [ networks ] [ network trainability ] [ network width ] [ Neural Architecture Search ] [ Neural Attention Distillation ] [ neural collapse ] [ Neural data compression ] [ Neural IR ] [ neural kernels ] [ neural link prediction ] [ Neural Model Explanation ] [ neural module network ] [ Neural Network ] [ Neural Network Bounding ] [ neural network calibration ] [ Neural Network Gaussian Process ] [ neural network robustness ] [ Neural networks ] [ Neural Networks ] [ neural network training ] [ Neural Network Verification ] [ neural ode ] [ Neural ODE ] [ Neural ODEs ] [ Neural operators ] [ Neural Physics Engines ] [ Neural Processes ] [ neural reconstruction ] [ neural sound synthesis ] [ neural spike train ] [ neural symbolic reasoning ] [ neural tangent kernel ] [ Neural tangent kernel ] [ Neural Tangent Kernel ] [ neural tangent kernels ] [ Neural text decoding ] [ neurobiology ] [ Neuroevolution ] [ Neuro symbolic ] [ Neuro-Symbolic Learning ] [ neuro-symbolic models ] [ NLI ] [ NLP ] [ Node Embeddings ] [ noise contrastive estimation ] [ Noise-contrastive learning ] [ Noise model ] [ noise robust learning ] [ Noisy Demonstrations ] [ noisy label ] [ Noisy Label ] [ Noisy Labels ] [ Non-asymptotic Confidence Intervals ] [ non-autoregressive generation ] [ nonconvex ] [ non-convex learning ] [ Non-Convex Optimization ] [ Non-IID ] [ nonlinear control theory ] [ nonlinear dynamical systems ] [ nonlinear Hawkes process ] [ nonlinear walk ] [ Non-Local Modules ] [ non-minimax optimization ] [ nonnegative PCA ] [ nonseparable Hailtonian system ] [ non-smooth models ] [ non-stationary stochastic processes ] [ no-regret learning ] [ normalized maximum likelihood ] [ normalize layer ] [ normalizers ] [ Normalizing Flow ] [ normalizing flows ] [ Normalizing flows ] [ Normalizing Flows ] [ normative models ] [ novelty-detection ] [ ntk ] [ number of linear regions ] [ numerical errors ] [ 
numerical linear algebra ] [ object-centric representations ] [ Object detection ] [ Object Detection ] [ object-keypoint representations ] [ ObjectNet ] [ Object Permanence ] [ Observational Imitation ] [ ODE ] [ offline ] [ offline/batch reinforcement learning ] [ off-line reinforcement learning ] [ offline reinforcement learning ] [ Offline Reinforcement Learning ] [ offline RL ] [ off-policy evaluation ] [ Off Policy Evaluation ] [ Off-policy policy evaluation ] [ Off-Policy Reinforcement Learning ] [ off-policy RL ] [ one-class-classification ] [ one-to-many mapping ] [ Open-domain ] [ open domain complex question answering ] [ open source ] [ Optimal Control Theory ] [ optimal convergence ] [ optimal power flow ] [ Optimal Transport ] [ optimal transport maps ] [ Optimisation for Deep Learning ] [ optimism ] [ Optimistic Gradient Descent Ascent ] [ Optimistic Mirror Decent ] [ Optimistic Multiplicative Weights Update ] [ Optimization ] [ order learning ] [ ordinary differential equation ] [ orthogonal ] [ orthogonal layers ] [ orthogonal machine learning ] [ Orthogonal Polynomials ] [ Oscillators ] [ outlier detection ] [ outlier-detection ] [ Outlier detection ] [ out-of-distribution ] [ Out-of-distribution detection in deep learning ] [ out-of-distribution generalization ] [ Out-of-domain ] [ over-fitting ] [ Overfitting ] [ overparameterisation ] [ over-parameterization ] [ Over-parameterization ] [ Overparameterization ] [ overparameterized neural networks ] [ Over-smoothing ] [ Oversmoothing ] [ over-squashing ] [ PAC Bayes ] [ padding ] [ parallel Monte Carlo Tree Search (MCTS) ] [ parallel tempering ] [ Parameter-Reduced MLR ] [ part-based ] [ Partial Amortization ] [ Partial differential equation ] [ partial differential equations ] [ partially observed environments ] [ particle inference ] [ pca ] [ pde ] [ pdes ] [ PDEs ] [ performer ] [ persistence diagrams ] [ personalized learning ] [ perturbation sets ] [ Peter-Weyl Theorem ] [ phase retrieval ] [ Physical parameter estimation ] [ physical reasoning ] [ physical scene understanding ] [ Physical Simulation ] [ physical symbol grounding ] [ physics ] [ physics-guided deep learning ] [ piecewise linear function ] [ pipeline toolkit ] [ plan-based reward shaping ] [ Planning ] [ Poincaré Ball Model ] [ Point cloud ] [ Point clouds ] [ point processes ] [ pointwise mutual information ] [ poisoning ] [ poisoning attack ] [ poisson matrix factorization ] [ policy learning ] [ Policy Optimization ] [ polynomial time ] [ Pose Estimation ] [ Position Embedding ] [ Position Encoding ] [ post-hoc calibration ] [ Post-Hoc Correction ] [ Post Training Quantization ] [ power grid management ] [ Predictive Modeling ] [ predictive uncertainty ] [ Predictive Uncertainty Estimation ] [ pretrained language model ] [ pretrained language model. 
] [ pre-trained language model fine-tuning ] [ Pretrained Language Models ] [ Pretrained Text Encoders ] [ pre-training ] [ Pre-training ] [ Primitive Discovery ] [ principal components analysis ] [ Privacy ] [ privacy leakage from gradients ] [ privacy preserving machine learning ] [ Privacy-utility tradeoff ] [ probabelistic models ] [ probabilistic generative models ] [ probabilistic inference ] [ probabilistic matrix factorization ] [ Probabilistic Methods ] [ probabilistic multivariate forecasting ] [ probabilistic numerics ] [ probabilistic programs ] [ probably approximated correct guarantee ] [ Probe ] [ probing ] [ procedural generation ] [ procedural knowledge ] [ product of experts ] [ Product Quantization ] [ Program obfuscation ] [ Program Synthesis ] [ Proper Scoring Rules ] [ protein ] [ prototype propagation ] [ Provable Robustness ] [ provable sample efficiency ] [ proximal gradient descent-ascent ] [ proxy ] [ Pruning ] [ Pruning at initialization ] [ pseudo-labeling ] [ Pseudo-Labeling ] [ QA ] [ Q-learning ] [ Quantization ] [ quantum machine learning ] [ quantum mechanics ] [ Quantum Mechanics ] [ Question Answering ] [ random ] [ Random Feature ] [ Random Features ] [ Randomized Algorithms ] [ Random Matrix Theory ] [ Random Weights Neural Networks ] [ rank-collapse ] [ rank-constrained convex optimization ] [ rao ] [ rao-blackwell ] [ Rate-distortion optimization ] [ raven's progressive matrices ] [ real time recurrent learning ] [ real-world ] [ Real-world image denoising ] [ reasoning paths ] [ recommendation systems ] [ recommender system ] [ Recommender Systems ] [ recovery likelihood ] [ rectified linear unit ] [ Recurrent Generative Model ] [ Recurrent Neural Network ] [ Recurrent neural networks ] [ Recurrent Neural Networks ] [ recursive dense retrieval ] [ reformer ] [ regime agnostic methods ] [ Regression ] [ Regression without correspondence ] [ regret analysis ] [ regret minimization ] [ Regularization ] [ Regularization by denoising ] [ regularized markov decision processes ] [ Reinforcement ] [ Reinforcement learning ] [ Reinforcement Learning ] [ Reinforcement Learnings ] [ Reinforcement learning theory ] [ relabelling ] [ Relational regularized autoencoder ] [ Relation Extraction ] [ relaxed regularization ] [ relu network ] [ ReLU networks ] [ Rematerialization ] [ Render-and-Compare ] [ Reparameterization ] [ repetitions ] [ replica exchange ] [ representational learning ] [ representation analysis ] [ Representation learning ] [ Representation Learning ] [ representation learning for computer vision ] [ representation learning for robotics ] [ representation of dynamical systems ] [ Representation Theory ] [ reproducibility ] [ reproducible research ] [ Reproducing kernel Hilbert space ] [ resampling ] [ reset-free ] [ residual ] [ ResNets ] [ resource constrained ] [ Restricted Boltzmann Machines ] [ retraining ] [ Retrieval ] [ reverse accuracy ] [ reverse engineering ] [ reward learning ] [ reward randomization ] [ reward shaping ] [ reweighting ] [ Rich observation ] [ rich observations ] [ risk-averse ] [ Risk bound ] [ Risk Estimation ] [ risk sensitive ] [ rl ] [ RMSprop ] [ RNA-protein interaction prediction ] [ RNA structure ] [ RNA structure embedding ] [ RNN ] [ RNNs ] [ robotic manipulation ] [ robust ] [ robust control ] [ robust deep learning ] [ Robust Deep Learning ] [ robust learning ] [ Robust Learning ] [ Robust Machine Learning ] [ Robustness ] [ Robustness certificates ] [ Robust Overfitting ] [ ROC ] [ Role-Based Learning ] [ 
rooted graphs ] [ Rotation invariance ] [ rtrl ] [ Runtime Systems ] [ Saddle-point Optimization ] [ safe ] [ Safe exploration ] [ safe planning ] [ Saliency ] [ Saliency Guided Data Augmentation ] [ saliency maps ] [ SaliencyMix ] [ sample complexity separation ] [ Sample Efficiency ] [ sample information ] [ sample reweighting ] [ Sampling ] [ sampling algorithms ] [ Scalability ] [ Scale ] [ scale-invariant weights ] [ Scale of initialization ] [ scene decomposition ] [ scene generation ] [ Scene Understanding ] [ Science ] [ science of deep learning ] [ score-based generative models ] [ score matching ] [ score-matching ] [ SDE ] [ Second-order analysis ] [ second-order approximation ] [ second-order optimization ] [ Security ] [ segmented models ] [ selective classification ] [ Self-Imitation ] [ self supervised learning ] [ Self-supervised learning ] [ Self-supervised Learning ] [ Self Supervised Learning ] [ Self-Supervised Learning ] [ self-supervision ] [ self-training ] [ self-training theory ] [ semantic anomaly detection ] [ semantic directions in latent space ] [ semantic graphs ] [ Semantic Image Synthesis ] [ semantic parsing ] [ semantic role labeling ] [ semantic-segmentation ] [ Semantic Segmentation ] [ Semantic Textual Similarity ] [ semi-infinite duality ] [ semi-nonnegative matrix factorization ] [ semiparametric inference ] [ semi-supervised ] [ Semi-supervised Learning ] [ Semi-Supervised Learning ] [ semi-supervised learning theory ] [ Sentence Embeddings ] [ Sentence Representations ] [ Sentiment ] [ separation of variables ] [ Sequence Data ] [ Sequence Modeling ] [ sequence models ] [ Sequence-to-sequence learning ] [ sequence-to-sequence models ] [ sequential data ] [ Sequential probability ratio test ] [ Sequential Representation Learning ] [ set prediction ] [ set transformer ] [ SGD ] [ SGD noise ] [ sgld ] [ Shape ] [ shape bias ] [ Shape Bias ] [ Shape Encoding ] [ shapes ] [ Shapley values ] [ Sharpness Minimization ] [ side channel analysis ] [ Sigma Delta Quantization ] [ sign agnostic learning ] [ signal propagation ] [ signature ] [ sim2real ] [ sim2real transfer ] [ simple ] [ Singularity analysis ] [ singular value decomposition ] [ Sinkhorn algorithm ] [ skeleton-based action recognition ] [ sketch-based modeling ] [ sketches ] [ Skill Discovery ] [ SLAM ] [ sliced fused Gromov Wasserstein ] [ Sliced Wasserstein ] [ Slowdown attacks ] [ slowness ] [ Smooth games ] [ smoothing ] [ SMT Solvers ] [ social perception ] [ Soft Body ] [ soft labels ] [ software ] [ sound classification ] [ sound spatialization ] [ Source Code ] [ sparse Bayesian learning ] [ Sparse Embedding ] [ sparse embeddings ] [ sparse reconstruction ] [ sparse representation ] [ sparse representations ] [ sparse stochastic gates ] [ Sparsity ] [ Sparsity Learning ] [ spatial awareness ] [ spatial bias ] [ spatial uncertainty ] [ spatio-temporal forecasting ] [ spatio-temporal graph ] [ spatio-temporal modeling ] [ spatio-temporal modelling ] [ spatiotemporal prediction ] [ Spatiotemporal Understanding ] [ Spectral Analysis ] [ Spectral Distribution ] [ Spectral Graph Filter ] [ spectral regularization ] [ speech generation ] [ speech-impaired ] [ speech processing ] [ speech recognition. 
] [ Speech Recognition ] [ spherical distributions ] [ spiking neural network ] [ spurious correlations ] [ square loss vs cross-entropy ] [ stability theory ] [ State abstraction ] [ state abstractions ] [ state-space models ] [ statistical learning theory ] [ Statistical Learning Theory ] [ statistical physics ] [ Statistical Physics ] [ statistical physics methods ] [ Steerable Kernel ] [ Stepsize optimization ] [ stochastic asymptotics ] [ stochastic control ] [ (stochastic) gradient descent ] [ Stochastic Gradient Descent ] [ stochastic gradient Langevin dynamics ] [ stochastic process ] [ Stochastic Processes ] [ stochastic subgradient method ] [ Storage Capacity ] [ straight-through ] [ straightthrough ] [ strategic behavior ] [ Streaming ASR ] [ structural biology ] [ structural credit assignment ] [ structural inductive bias ] [ Structured Pruning ] [ Structure learning ] [ structure prediction ] [ structures prediction ] [ Style Mixing ] [ Style Transfer ] [ subgraph reasoning. ] [ sublinear ] [ submodular optimization ] [ Subspace clustering ] [ Summarization ] [ summary statistics ] [ superpixel ] [ supervised contrastive learning ] [ Supervised Deep Networks ] [ Supervised Learning ] [ support estimation ] [ surprisal ] [ surrogate models ] [ svd ] [ SVD ] [ Symbolic Methods ] [ symbolic regression ] [ symbolic representations ] [ Symmetry ] [ symplectic networks ] [ Syntax ] [ Synthetic benchmark dataset ] [ synthetic-to-real generalization ] [ Systematic generalisation ] [ Systematicity ] [ System identification ] [ Tabular ] [ tabular data ] [ Tabular Data ] [ targeted attack ] [ Task Embeddings ] [ task generation ] [ task-oriented dialogue ] [ Task-oriented Dialogue System ] [ task reduction ] [ Task Segmentation ] [ Teacher-Student Learning ] [ teacher-student model ] [ temporal context ] [ Temporal knowledge graph ] [ temporal networks ] [ tensor product ] [ Text-based Games ] [ Text Representation ] [ Text Retrieval ] [ Text to speech ] [ Text to speech synthesis ] [ text-to-sql ] [ Texture ] [ Texture Bias ] [ Textworld ] [ Theorem proving ] [ theoretical issues in deep learning ] [ theoretical limits ] [ theoretical study ] [ Theory ] [ Theory of deep learning ] [ theory of mind ] [ Third-Person Imitation ] [ Thompson sampling ] [ time-frequency representations ] [ timescale ] [ timescales ] [ Time Series ] [ Time series forecasting ] [ time series prediction ] [ topic modelling ] [ Topology ] [ training dynamics ] [ Training Method ] [ trajectory ] [ trajectory optimization ] [ trajectory prediction ] [ Transferability ] [ Transfer learning ] [ Transfer Learning ] [ transformation invariance ] [ Transformer ] [ Transformers ] [ traveling salesperson problem ] [ Tree-structured Data ] [ trembl ] [ tropical function ] [ trust region ] [ two-layer neural network ] [ Uncertainty ] [ uncertainty calibration ] [ Uncertainty estimates ] [ Uncertainty estimation ] [ Uncertainty Machine Learning ] [ understanding ] [ understanding CNNs ] [ Understanding Data Augmentation ] [ understanding decision-making ] [ understanding deep learning ] [ Understanding Deep Learning ] [ understanding neural networks ] [ U-Net ] [ unidirectional ] [ uniprot ] [ universal approximation ] [ Universal approximation ] [ Universality ] [ universal representation learning ] [ universal sound separation ] [ unlabeled data ] [ Unlabeled Entity Problem ] [ Unlearnable Examples ] [ unrolled algorithms ] [ Unsupervised denoising ] [ Unsupervised Domain Translation ] [ unsupervised image denoising ] [ 
Unsupervised learning ] [ Unsupervised Learning ] [ unsupervised learning theory ] [ unsupervised loss ] [ Unsupervised Meta-learning ] [ unsupervised object discovery ] [ Unsupervised reinforcement learning ] [ unsupervised skill discovery ] [ unsupervised stabilization ] [ Upper Confidence bound applied to Trees (UCT) ] [ Usable Information ] [ VAE ] [ Value factorization ] [ value learning ] [ vanishing gradient problem ] [ variable binding ] [ variable convergence ] [ Variable Embeddings ] [ Variance Networks ] [ Variational Auto-encoder ] [ Variational autoencoders ] [ Variational Autoencoders ] [ Variational inference ] [ variational information bottleneck ] [ Verification ] [ video analysis ] [ Video Classification ] [ Video Compression ] [ video generation ] [ video-grounded dialogues ] [ Video prediction ] [ Video Reasoning ] [ video recognition ] [ Video Recognition ] [ video representation learning ] [ video synthesis ] [ video-text learning ] [ views ] [ virtual environment ] [ vision-and-language-navigation ] [ visual counting ] [ visualization ] [ visual perception ] [ Visual Reasoning ] [ visual reinforcement learning ] [ visual representation learning ] [ visual saliency ] [ vocoder ] [ voice conversion ] [ Volume Analysis ] [ VQA ] [ vulnerability of RL ] [ wanet ] [ warping functions ] [ Wasserstein ] [ wasserstein-2 barycenters ] [ wasserstein-2 distance ] [ Wasserstein distance ] [ waveform generation ] [ weakly-supervised learning ] [ weakly supervised representation learning ] [ Weak supervision ] [ Weak-supervision ] [ webly-supervised learning ] [ weight attack ] [ weight balance ] [ Weight quantization ] [ weight-sharing ] [ wide local minima ] [ Wigner-Eckart Theorem ] [ winning tickets ] [ wireframe model ] [ word-learning ] [ world models ] [ World Models ] [ worst-case generalisation ] [ xai ] [ XAI ] [ zero-order optimization ] [ zero-shot learning ] [ Zero-shot learning ] [ Zero-shot Learning ] [ Zero-shot synthesis ]

188 Results

Poster
Mon 1:00 Randomized Ensembled Double Q-Learning: Learning Fast Without a Model
Xinyue Chen, Che Wang, Zijian Zhou, Keith Ross
Poster
Mon 1:00 Temporally-Extended ε-Greedy Exploration
Will Dabney, Georg Ostrovski, Andre Barreto
Poster
Mon 1:00 Noise against noise: stochastic label noise helps combat inherent label noise
Pengfei Chen, Guangyong Chen, Junjie Ye, Jingwei Zhao, Pheng-Ann Heng
Poster
Mon 1:00 The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods
Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon
Poster
Mon 1:00 Revisiting Locally Supervised Learning: an Alternative to End-to-end Training
Yulin Wang, Zanlin Ni, Shiji Song, Le Yang, Gao Huang
Poster
Mon 1:00 Batch Reinforcement Learning Through Continuation Method
Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed H. Chi, Honglak Lee, Minmin Chen
Poster
Mon 1:00 Deciphering and Optimizing Multi-Task Learning: a Random Matrix Approach
Malik Tiomoko, Hafiz Tiomoko Ali, Romain Couillet
Poster
Mon 1:00 MetaNorm: Learning to Normalize Few-Shot Batches Across Domains
Yingjun Du, Xiantong Zhen, Ling Shao, Cees G Snoek
Poster
Mon 1:00 Scalable Transfer Learning with Expert Models
Joan Puigcerver, Carlos Riquelme, Basil Mustafa, Cedric Renggli, André Susano Pinto, Sylvain Gelly, Daniel Keysers, Neil Houlsby
Oral
Mon 3:15 Free Lunch for Few-shot Learning: Distribution Calibration
Shuo Yang, Lu Liu, Min Xu
Spotlight
Mon 3:30 Deciphering and Optimizing Multi-Task Learning: a Random Matrix Approach
Malik Tiomoko, Hafiz Tiomoko Ali, Romain Couillet
Spotlight
Mon 4:40 How Benign is Benign Overfitting?
Amartya Sanyal, Puneet Dokania, Varun Kanade, Philip Torr
Spotlight
Mon 5:45 Contrastive Divergence Learning is a Time Reversal Adversarial Game
Omer Yair, Tomer Michaeli
Poster
Mon 9:00 Learning explanations that are hard to vary
Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schoelkopf
Poster
Mon 9:00 Universal approximation power of deep residual neural networks via nonlinear control theory
Paulo Tabuada, Bahman Gharesifard
Poster
Mon 9:00 Effective Distributed Learning with Random Features: Improved Bounds and Algorithms
Yong Liu, Jiankun Liu, Shuqiang Wang
Poster
Mon 9:00 Shape-Texture Debiased Neural Network Training
Yingwei Li, Qihang Yu, Mingxing Tan, Jieru Mei, Peng Tang, Wei Shen, Alan Yuille, Cihang Xie
Poster
Mon 9:00 Training GANs with Stronger Augmentations via Contrastive Discriminator
Jongheon Jeong, Jinwoo Shin
Poster
Mon 9:00 Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers
Ben Eysenbach, Shreyas Chaudhari, Swapnil Asawa, Sergey Levine, Ruslan Salakhutdinov
Poster
Mon 9:00 On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
Marius Mosbach, Maksym Andriushchenko, Dietrich Klakow
Poster
Mon 9:00 MoVie: Revisiting Modulated Convolutions for Visual Counting and Beyond
Duy-Kien Nguyen, Vedanuj Goswami, Xinlei Chen
Poster
Mon 9:00 Primal Wasserstein Imitation Learning
Robert Dadashi, Léonard Hussenot, Matthieu Geist, Olivier Pietquin
Poster
Mon 9:00 Uncertainty Sets for Image Classifiers using Conformal Prediction
Anastasios Angelopoulos, Stephen Bates, Michael Jordan, Jitendra Malik
Poster
Mon 9:00 X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback
Jensen Gao, Siddharth Reddy, Glen Berseth, Nick Hardy, Nikhilesh Natraj, Karunesh Ganguly, Anca Dragan, Sergey Levine
Poster
Mon 9:00 Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
Denis Yarats, Ilya Kostrikov, Rob Fergus
Poster
Mon 9:00 The Risks of Invariant Risk Minimization
Elan Rosenfeld, Pradeep K Ravikumar, Andrej Risteski
Poster
Mon 9:00 On the role of planning in model-based deep reinforcement learning
Jessica Hamrick, Abram Friesen, Feryal Behbahani, Arthur Guez, Fabio Viola, Sims Witherspoon, Thomas Anthony, Lars Buesing, Petar Veličković, Theo Weber
Poster
Mon 9:00 Learning with AMIGo: Adversarially Motivated Intrinsic Goals
Andres Campero, Roberta Raileanu, Heinrich Kuttler, Joshua B Tenenbaum, Tim Rocktaeschel, Ed Grefenstette
Oral
Mon 11:30 Growing Efficient Deep Networks by Structured Continuous Sparsification
Xin Yuan, Pedro Savarese, Michael Maire
Spotlight
Mon 11:45 Geometry-Aware Gradient Algorithms for Neural Architecture Search
Liam Li, Misha Khodak, Nina Balcan, Ameet Talwalkar
Spotlight
Mon 12:35 Systematic generalisation with group invariant predictions
Faruk Ahmed, Yoshua Bengio, Harm van Seijen, Aaron Courville
Spotlight
Mon 13:20 Uncertainty Sets for Image Classifiers using Conformal Prediction
Anastasios Angelopoulos, Stephen Bates, Michael Jordan, Jitendra Malik
Spotlight
Mon 13:40 Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models
Zirui Wang, Yulia Tsvetkov, Orhan Firat, Yuan Cao
Poster
Mon 17:00 MoPro: Webly Supervised Learning with Momentum Prototypes
Junnan Li, Caiming Xiong, Steven Hoi
Poster
Mon 17:00 MixKD: Towards Efficient Distillation of Large-scale Language Models
Kevin Liang, Weituo Hao, Dinghan Shen, Yufan Zhou, Weizhu Chen, Changyou Chen, Lawrence Carin
Poster
Mon 17:00 Learning a Latent Simplex in Input Sparsity Time
Ainesh Bakshi, Chiranjib Bhattacharyya, Ravi Kannan, David Woodruff, Samson Zhou
Poster
Mon 17:00 Representation Learning for Sequence Data with Deep Autoencoding Predictive Components
Junwen Bai, Weiran Wang, Yingbo Zhou, Caiming Xiong
Poster
Mon 17:00 One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks
Atish Agarwala, Abhimanyu Das, Brendan Juba, Rina Panigrahy, Vatsal Sharan, Xin Wang, Qiuyi Zhang
Poster
Mon 17:00 Undistillable: Making A Nasty Teacher That CANNOT teach students
Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Chenyu You, Xiaohui Xie, Zhangyang Wang
Poster
Mon 17:00 SOLAR: Sparse Orthogonal Learned and Random Embeddings
Tharun Medini, Beidi Chen, Anshumali Shrivastava
Poster
Mon 17:00 Layer-adaptive Sparsity for the Magnitude-based Pruning
Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Poster
Mon 17:00 Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting
Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E Gonzalez, Marcus Rohrbach, Trevor Darrell
Poster
Mon 17:00 PseudoSeg: Designing Pseudo Labels for Semantic Segmentation
Yuliang Zou, Zizhao Zhang, Han Zhang, Chun-Liang Li, Xiao Bian, Jia-Bin Huang, Tomas Pfister
Poster
Mon 17:00 Self-training For Few-shot Transfer Across Extreme Task Differences
Cheng Phoo, Bharath Hariharan
Spotlight
Mon 20:18 Improving Adversarial Robustness via Channel-wise Activation Suppressing
Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Daniel Ma, Yisen Wang
Spotlight
Mon 20:28 Fast Geometric Projections for Local Robustness Certification
Aymeric Fromherz, Klas Leino, Matt Fredrikson, Bryan Parno, Corina Pasareanu
Oral
Mon 21:21 How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Keyulu Xu, Mozhi Zhang, Jingling Li, Simon Du, Ken-Ichi Kawarabayashi, Stefanie Jegelka
Poster
Tue 1:00 Learning Better Structured Representations Using Low-rank Adaptive Label Smoothing
Asish Ghoshal, Xilun Chen, Sonal Gupta, Luke Zettlemoyer, Yashar Mehdad
Poster
Tue 1:00 Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator
Max B Paulus, Chris Maddison, Andreas Krause
Poster
Tue 1:00 Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland Zimmermann, Judith Schepers, Robert Geirhos, Thomas S Wallis, Matthias Bethge, Wieland Brendel
Poster
Tue 1:00 FedMix: Approximation of Mixup under Mean Augmented Federated Learning
Tehrim Yoon, Sumin Shin, Sung Ju Hwang, Eunho Yang
Poster
Tue 1:00 Class Normalization for (Continual)? Generalized Zero-Shot Learning
Ivan Skorokhodov, Mohamed Elhoseiny
Poster
Tue 1:00 Accurate Learning of Graph Representations with Graph Multiset Pooling
Jinheon Baek, Minki Kang, Sung Ju Hwang
Poster
Tue 1:00 Activation-level uncertainty in deep neural networks
Pablo Morales-Alvarez, Daniel Hernández-Lobato, Rafael Molina, José Miguel Hernández Lobato
Poster
Tue 1:00 Effective Abstract Reasoning with Dual-Contrast Network
Tao Zhuo, Mohan Kankanhalli
Poster
Tue 1:00 Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies
Dominik Schmidt, Georgia Koppe, Zahra Monfared, Max Beutelspacher, Daniel Durstewitz
Poster
Tue 1:00 A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention
Grégoire Mialon, Dexiong Chen, Alexandre d'Aspremont, Julien Mairal
Poster
Tue 1:00 Lossless Compression of Structured Convolutional Models via Lifting
Gustav Sourek, Filip Zelezny, Ondrej Kuzelka
Spotlight
Tue 3:25 Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
Michael Schlichtkrull, Nicola De Cao, Ivan Titov
Oral
Tue 4:08 Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator
Max B Paulus, Chris Maddison, Andreas Krause
Spotlight
Tue 4:38 Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows
Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs Bergmann, Roland Vollgraf
Spotlight
Tue 4:48 Noise against noise: stochastic label noise helps combat inherent label noise
Pengfei Chen, Guangyong Chen, Junjie Ye, Jingwei Zhao, Pheng-Ann Heng
Spotlight
Tue 5:28 Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies
Dominik Schmidt, Georgia Koppe, Zahra Monfared, Max Beutelspacher, Daniel Durstewitz
Poster
Tue 9:00 Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning
Zhiyuan Li, Yuping Luo, Kaifeng Lyu
Poster
Tue 9:00 The geometry of integration in text classification RNNs
Kyle Aitken, Vinay Ramasesh, Ankush Garg, Yuan Cao, David Sussillo, Niru Maheswaranathan
Poster
Tue 9:00 Systematic generalisation with group invariant predictions
Faruk Ahmed, Yoshua Bengio, Harm van Seijen, Aaron Courville
Poster
Tue 9:00 Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments
Daochen Zha, Wenye Ma, Lei Yuan, Xia Hu, Ji Liu
Poster
Tue 9:00 On the Dynamics of Training Attention Models
Haoye Lu, Yongyi Mao, Amiya Nayak
Poster
Tue 9:00 How Benign is Benign Overfitting?
Amartya Sanyal, Puneet Dokania, Varun Kanade, Philip Torr
Poster
Tue 9:00 Support-set bottlenecks for video-text representation learning
Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander G Hauptmann, Joao F. Henriques, Andrea Vedaldi
Poster
Tue 9:00 Characterizing signal propagation to close the performance gap in unnormalized ResNets
Andrew Brock, Soham De, Samuel Smith
Poster
Tue 9:00 Distance-Based Regularisation of Deep Networks for Fine-Tuning
Henry Gouk, Timothy Hospedales, Massimiliano Pontil
Oral
Tue 11:00 Iterated learning for emergent systematicity in VQA
Ankit Vani, Max Schwarzer, Yuchen Lu, Eeshan Dhekane, Aaron Courville
Spotlight
Tue 11:30 How Does Mixup Help With Robustness and Generalization?
Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou
Oral
Tue 12:00 Randomized Automatic Differentiation
Deniz Oktay, Nick McGreivy, Joshua Aduol, Alex Beatson, Ryan P Adams
Poster
Tue 17:00 Learning to Reach Goals via Iterated Supervised Learning
Dibya Ghosh, Abhishek Gupta, Ashwin D Reddy, Justin Fu, Coline M Devin, Ben Eysenbach, Sergey Levine
Poster
Tue 17:00 Contextual Dropout: An Efficient Sample-Dependent Dropout Module
Xinjie Fan, Shujian Zhang, Korawat Tanwisuth, Xiaoning Qian, Mingyuan Zhou
Poster
Tue 17:00 Linear Mode Connectivity in Multitask and Continual Learning
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Dilan Gorur, Razvan Pascanu, Hassan Ghasemzadeh
Poster
Tue 17:00 CompOFA – Compound Once-For-All Networks for Faster Multi-Platform Deployment
Manas Sahni, Shreya Varshini, Alind Khare, Alexey Tumanov
Poster
Tue 17:00 Fuzzy Tiling Activations: A Simple Approach to Learning Sparse Representations Online
Yangchen Pan, Kirby Banman, Martha White
Poster
Tue 17:00 DrNAS: Dirichlet Neural Architecture Search
Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, Cho-Jui Hsieh
Poster
Tue 17:00 Drop-Bottleneck: Learning Discrete Compressed Representation for Noise-Robust Exploration
Jaekyeom Kim, Minjung Kim, Dongyeon Woo, Gunhee Kim
Poster
Tue 17:00 Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
Thao Nguyen, Maithra Raghu, Simon Kornblith
Poster
Tue 17:00 BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration
Augustus Odena, Kensen Shi, David Bieber, Rishabh Singh, Charles Sutton, Hanjun Dai
Poster
Tue 17:00 How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Keyulu Xu, Mozhi Zhang, Jingling Li, Simon Du, Ken-Ichi Kawarabayashi, Stefanie Jegelka
Poster
Tue 17:00 A Temporal Kernel Approach for Deep Learning with Continuous-time Information
Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, Kannan Achan
Poster
Tue 17:00 Discrete Graph Structure Learning for Forecasting Multiple Time Series
Chao Shang, Jie Chen, Jinbo Bi
Oral
Tue 19:00 Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients
Brenden Petersen, Mikel Landajuela Larma, Terrell N Mundhenk, Claudio Santiago, Soo Kim, Joanne Kim
Poster
Wed 1:00 Simple Spectral Graph Convolution
Hao Zhu, Piotr Koniusz
Poster
Wed 1:00 Robust Learning of Fixed-Structure Bayesian Networks in Nearly-Linear Time
Yu Cheng, Honghao Lin
Poster
Wed 1:00 Improving Adversarial Robustness via Channel-wise Activation Suppressing
Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Daniel Ma, Yisen Wang
Poster
Wed 1:00 Knowledge distillation via softmax regression representation learning
Jing Yang, Brais Martinez, Adrian Bulat, Georgios Tzimiropoulos
Poster
Wed 1:00 Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
Zhenggang Tang, Chao Yu, Boyuan Chen, Huazhe Xu, Xiaolong Wang, Fei Fang, Simon Du, Yu Wang, Yi Wu
Poster
Wed 1:00 Reweighting Augmented Samples by Minimizing the Maximal Expected Loss
Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma
Poster
Wed 1:00 Gradient Origin Networks
Sam Bond-Taylor, Chris G Willcocks
Poster
Wed 1:00 On Data-Augmentation and Consistency-Based Semi-Supervised Learning
Atin Ghosh, Alexandre Thiery
Poster
Wed 1:00 Differentiable Segmentation of Sequences
Erik Scharwächter, Jonathan Lennartz, Emmanuel Müller
Poster
Wed 1:00 Fooling a Complete Neural Network Verifier
Dániel Zombori, Balázs Bánhelyi, Tibor Csendes, István Megyeri, Márk Jelasity
Poster
Wed 1:00 FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization
Lanqing Li, Rui Yang, Dijun Luo
Poster
Wed 1:00 A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
Sanghyun Hong, Yigitcan Kaya, Ionut-Vlad Modoranu, Tudor Dumitras
Spotlight
Wed 3:45 Support-set bottlenecks for video-text representation learning
Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander G Hauptmann, Joao F. Henriques, Andrea Vedaldi
Poster
Wed 9:00 Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows
Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs Bergmann, Roland Vollgraf
Poster
Wed 9:00 Multiplicative Filter Networks
Rizal Fathony, Anit Kumar Sahu, Devin Willmott, Zico Kolter
Poster
Wed 9:00 Modeling the Second Player in Distributionally Robust Optimization
Paul Michel, Tatsunori Hashimoto, Graham Neubig
Poster
Wed 9:00 Geometry-Aware Gradient Algorithms for Neural Architecture Search
Liam Li, Misha Khodak, Nina Balcan, Ameet Talwalkar
Poster
Wed 9:00 For self-supervised learning, Rationality implies generalization, provably
Yamini Bansal, Gal Kaplun, Boaz Barak
Poster
Wed 9:00 Iterated learning for emergent systematicity in VQA
Ankit Vani, Max Schwarzer, Yuchen Lu, Eeshan Dhekane, Aaron Courville
Poster
Wed 9:00 Unsupervised Audiovisual Synthesis via Exemplar Autoencoders
Kangle Deng, Aayush Bansal, Deva Ramanan
Poster
Wed 9:00 Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients
Brenden Petersen, Mikel Landajuela Larma, Terrell N Mundhenk, Claudio Santiago, Soo Kim, Joanne Kim
Poster
Wed 9:00 Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
Michael Schlichtkrull, Nicola De Cao, Ivan Titov
Poster
Wed 9:00 Unbiased Teacher for Semi-Supervised Object Detection
Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, Peter Vajda
Poster
Wed 9:00 How Does Mixup Help With Robustness and Generalization?
Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou
Poster
Wed 9:00 Entropic gradient descent algorithms and wide flat minima
Fabrizio Pittorino, Carlo Lucibello, Christoph Feinauer, Gabriele Perugini, Carlo Baldassi, Elizaveta Demyanenko, Riccardo Zecchina
Poster
Wed 9:00 Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
Paul Pu Liang, Manzil Zaheer, Yuan Wang, Amr Ahmed
Poster
Wed 9:00 Growing Efficient Deep Networks by Structured Continuous Sparsification
Xin Yuan, Pedro Savarese, Michael Maire
Oral
Wed 11:15 Learning to Reach Goals via Iterated Supervised Learning
Dibya Ghosh, Abhishek Gupta, Ashwin D Reddy, Justin Fu, Coline M Devin, Ben Eysenbach, Sergey Levine
Oral
Wed 11:45 Evolving Reinforcement Learning Algorithms
John Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Quoc V Le, Sergey Levine, Honglak Lee, Aleksandra Faust
Spotlight
Wed 12:00 Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
Denis Yarats, Ilya Kostrikov, Rob Fergus
Spotlight
Wed 13:38 Dynamic Tensor Rematerialization
Marisa Kirisame, Steven S. Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared G Roesch, Tianqi Chen, Zachary Tatlock
Spotlight
Wed 13:58 Differentially Private Learning Needs Better Features (or Much More Data)
Florian Tramer, Dan Boneh
Poster
Wed 17:00 Mixed-Features Vectors and Subspace Splitting
Alejandro Pimentel-Alarcón, Daniel L Pimentel-Alarcón
Poster
Wed 17:00 Learning Long-term Visual Dynamics with Region Proposal Interaction Networks
Haozhi Qi, Xiaolong Wang, Deepak Pathak, Yi Ma, Jitendra Malik
Poster
Wed 17:00 Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
Mrigank Raman, Aaron Chan, Siddhant Agarwal, PeiFeng Wang, Hansen Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, Xiang Ren
Poster
Wed 17:00 ALFWorld: Aligning Text and Embodied Environments for Interactive Learning
Mohit Shridhar, Eric Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, Matthew Hausknecht
Poster
Wed 17:00 Evolving Reinforcement Learning Algorithms
John Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Quoc V Le, Sergey Levine, Honglak Lee, Aleksandra Faust
Poster
Wed 17:00 Simple Augmentation Goes a Long Way: ADRL for DNN Quantization
Lin Ning, Guoyang Chen, Weifeng Zhang, Xipeng Shen
Poster
Wed 17:00 Beyond Categorical Label Representations for Image Classification
Boyuan Chen, Yu Li, Sunand Raghupathi, Hod Lipson
Poster
Wed 17:00 Estimating Lipschitz constants of monotone deep equilibrium models
Chirag Pabbaraju, Ezra Winston, Zico Kolter
Poster
Wed 17:00 CPR: Classifier-Projection Regularization for Continual Learning
Sungmin Cha, Hsiang Hsu, Taebaek Hwang, Flavio Calmon, Taesup Moon
Spotlight
Wed 20:40 Undistillable: Making A Nasty Teacher That CANNOT teach students
Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Chenyu You, Xiaohui Xie, Zhangyang Wang
Spotlight
Wed 20:50 CPT: Efficient Deep Neural Network Training via Cyclic Precision
Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, Yingyan Lin
Poster
Thu 1:00 Contrastive Divergence Learning is a Time Reversal Adversarial Game
Omer Yair, Tomer Michaeli
Poster
Thu 1:00 Counterfactual Generative Networks
Axel Sauer, Andreas Geiger
Poster
Thu 1:00 Free Lunch for Few-shot Learning: Distribution Calibration
Shuo Yang, Lu Liu, Min Xu
Poster
Thu 1:00 AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights
Byeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Gyuwan Kim, Youngjung Uh, Jung-Woo Ha
Poster
Thu 1:00 Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues
(Henry) Hung Le, Nancy F Chen, Steven Hoi
Poster
Thu 1:00 Practical Massively Parallel Monte-Carlo Tree Search Applied to Molecular Design
Xiufeng Yang, Tanuj Aasawat, Kazuki Yoshizoe
Poster
Thu 1:00 CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning
Ossama Ahmed, Frederik Träuble, Anirudh Goyal, Alexander Neitz, Manuel Wuthrich, Yoshua Bengio, Bernhard Schoelkopf, Stefan Bauer
Poster
Thu 1:00 What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study
Marcin Andrychowicz, Anton Raichuk, Piotr Stanczyk, Manu Orsini, Sertan Girgin, Raphaël Marinier, Léonard Hussenot-Desenonges, Matthieu Geist, Olivier Pietquin, Marcin Michalski, Sylvain Gelly, Olivier Bachem
Poster
Thu 1:00 Efficient Inference of Flexible Interaction in Spiking-neuron Networks
Feng Zhou, Yixuan Zhang, Jun Zhu
Poster
Thu 1:00 Conditional Generative Modeling via Learning the Latent Space
Sameera Ramasinghe, Kanchana Ranasinghe, Salman Khan, Nick Barnes, Stephen Gould
Oral
Thu 3:00 What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study
Marcin Andrychowicz, Anton Raichuk, Piotr Stanczyk, Manu Orsini, Sertan Girgin, Raphaël Marinier, Léonard Hussenot-Desenonges, Matthieu Geist, Olivier Pietquin, Marcin Michalski, Sylvain Gelly, Olivier Bachem
Invited Talk
Thu 8:00 Soft bodied robots for human centered design of robots for everyday life
Kyu Jin Cho
Poster
Thu 9:00 Enforcing robust control guarantees within neural network policies
Priya Donti, Melrose Roderick, Mahyar Fazlyab, Zico Kolter
Poster
Thu 9:00 Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models
Zirui Wang, Yulia Tsvetkov, Orhan Firat, Yuan Cao
Poster
Thu 9:00 Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis
Rafael Valle, Kevin J Shih, Ryan Prenger, Bryan Catanzaro
Poster
Thu 9:00 Dynamic Tensor Rematerialization
Marisa Kirisame, Steven S. Lyubomirsky, Altan Haan, Jennifer Brennan, Mike He, Jared G Roesch, Tianqi Chen, Zachary Tatlock
Poster
Thu 9:00 Deep Networks and the Multiple Manifold Problem
Sam Buchanan, Dar Gilboa, John Wright
Poster
Thu 9:00 Meta-learning with negative learning rates
Alberto Bernacchia
Poster
Thu 9:00 Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning
Xuebo Liu, Longyue Wang, Derek Wong, Liang Ding, Lidia Chao, Zhaopeng Tu
Poster
Thu 9:00 Differentially Private Learning Needs Better Features (or Much More Data)
Florian Tramer, Dan Boneh
Poster
Thu 9:00 Initialization and Regularization of Factorized Neural Layers
Misha Khodak, Neil Tenenholtz, Lester Mackey, Nicolo Fusi
Poster
Thu 9:00 Directed Acyclic Graph Neural Networks
Veronika Thost, Jie Chen
Poster
Thu 9:00 Blending MPC & Value Function Approximation for Efficient Reinforcement Learning
Mohak Bhardwaj, Sanjiban Choudhury, Byron Boots
Poster
Thu 9:00 Learning to Set Waypoints for Audio-Visual Navigation
Changan Chen, Sagnik Majumder, Ziad Al-Halah, Ruohan Gao, Santhosh Kumar Ramakrishnan, Kristen Grauman
Poster
Thu 9:00 Deconstructing the Regularization of BatchNorm
Yann Dauphin, Ekin Cubuk
Poster
Thu 9:00 Linear Last-iterate Convergence in Constrained Saddle-point Optimization
Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, Haipeng Luo
Oral
Thu 13:15 Self-training For Few-shot Transfer Across Extreme Task Differences
Cheng Phoo, Bharath Hariharan
Spotlight
Thu 13:30 A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
Sanghyun Hong, Yigitcan Kaya, Ionut-Vlad Modoranu, Tudor Dumitras
Spotlight
Thu 13:40 BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration
Augustus Odena, Kensen Shi, David Bieber, Rishabh Singh, Charles Sutton, Hanjun Dai
Poster
Thu 17:00 $i$-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning
Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee
Poster
Thu 17:00 Combining Label Propagation and Simple Models out-performs Graph Neural Networks
Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin Benson
Poster
Thu 17:00 On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis
Zhong Li, Jiequn Han, Weinan E, Qianxiao Li
Poster
Thu 17:00 Combining Ensembles and Data Augmentation Can Harm Your Calibration
Yeming Wen, Ghassen Jerfel, Rafael Müller, Michael W Dusenberry, Jasper Snoek, Balaji Lakshminarayanan, Dustin Tran
Poster
Thu 17:00 Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval
Wenhan Xiong, Lorraine Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, Barlas Oguz
Poster
Thu 17:00 Randomized Automatic Differentiation
Deniz Oktay, Nick McGreivy, Joshua Aduol, Alex Beatson, Ryan P Adams
Poster
Thu 17:00 No MCMC for me: Amortized sampling for fast and stable training of energy-based models
Will Grathwohl, Jacob Kelly, Milad Hashemi, Mohammad Norouzi, Kevin Swersky, David Duvenaud
Poster
Thu 17:00 CPT: Efficient Deep Neural Network Training via Cyclic Precision
Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, Yingyan Lin
Poster
Thu 17:00 Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
Colin Wei, Kendrick Shen, Yining Chen, Tengyu Ma
Poster
Thu 17:00 Fast Geometric Projections for Local Robustness Certification
Aymeric Fromherz, Klas Leino, Matt Fredrikson, Bryan Parno, Corina Pasareanu
Oral
Thu 19:00 Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
Colin Wei, Kendrick Shen, Yining Chen, Tengyu Ma
Spotlight
Thu 20:58 Learning a Latent Simplex in Input Sparsity Time
Ainesh Bakshi, Chiranjib Bhattacharyya, Ravi Kannan, David Woodruff, Samson Zhou
Workshop
Fri 6:14 Density Approximation in Deep Generative Models with Kernel Transfer Operators
Zhichun Huang
Workshop
Fri 6:18 Adversarial Data Augmentation Improves Unsupervised Machine Learning
Chia-Yi Hsu
Workshop
Fri 6:22 On Adversarial Robustness: A Neural Architecture Search perspective
Chaitanya Devaguptapu
Workshop
Fri 10:30 Gal Mishne: Visualizing the PHATE of deep neural networks
Gal Mishne
Workshop
Fri 11:35 Spotlight 7: Emilien Dupont, COIN: COmpression with Implicit Neural representations
Workshop
Fri 11:36 Continuous Weight Balancing
Daniel J Wu
Workshop
Fri 11:52 DeepSMOTE: Deep Learning for Imbalanced Data
Bartosz Krawczyk
Workshop
Reset-Free Reinforcement Learning via Multi-Task Learning: Learning Dexterous Manipulation Behaviors without Human Intervention
Abhishek Gupta, Justin Yu, Vikash Kumar, Tony Zhao, Kelvin Xu, Aaron Rovinsky, Thomas Devlin, Sergey Levine
Workshop
Simple Transparent Adversarial Examples
Jaydeep Borkar
Workshop
Distributed Gaussian Differential Privacy Via Shuffling
Kan Chen, Qi Long
Workshop
Membership Inference Attack on Graph Neural Networks
Iyiola Emmanuel Olatunji, Wolfgang Nejdl, Megha Khosla
Workshop
Practical Defences Against Model Inversion Attacks for Split Neural Networks
Tom Titcombe, Adam Hall, Pavlos Papadopoulos, Daniele Romanini
Workshop
PyVertical: A Vertical Federated Learning Framework for Multi-headed SplitNN
Daniele Romanini, Adam Hall, Pavlos Papadopoulos, Tom Titcombe, Abbas Ismail, Tudor Cebere, Robert Sandmann, Robin Roehm, Michael Hoeh
Workshop
A Graphical Model Perspective on Federated Learning
Christos Louizos, Matthias Reisser, Joseph Soriaga, Max Welling
Workshop
Gradient-Masked Federated Optimization
Irene Tenison, Sreya Francis, Irina Rish
Workshop
Self-Constructing Neural Networks through Random Mutation
Samuel Schmidgall