Topic Keywords

[ $\ell_1$ norm ] [ $f-$divergence ] [ 3D Convolution ] [ 3D deep learning ] [ 3D generation ] [ 3d point cloud ] [ 3D Reconstruction ] [ 3D scene understanding ] [ 3D shape representations ] [ 3D shapes learning ] [ 3D vision ] [ 3D Vision ] [ abstract reasoning ] [ abstract rules ] [ Acceleration ] [ accuracy ] [ acoustic condition modeling ] [ Action localization ] [ action recognition ] [ activation maximization ] [ activation strategy. ] [ Active learning ] [ Active Learning ] [ AdaBoost ] [ adaptive heavy-ball methods ] [ Adaptive Learning ] [ adaptive methods ] [ adaptive optimization ] [ ADMM ] [ Adversarial Accuracy ] [ Adversarial Attack ] [ Adversarial Attacks ] [ adversarial attacks/defenses ] [ Adversarial computer programs ] [ Adversarial Defense ] [ Adversarial Example Detection ] [ Adversarial Examples ] [ Adversarial Learning ] [ Adversarial Machine Learning ] [ adversarial patch ] [ Adversarial robustness ] [ Adversarial Robustness ] [ Adversarial training ] [ Adversarial Training ] [ Adversarial Transferability ] [ aesthetic assessment ] [ affine parameters ] [ age estimation ] [ Aggregation Methods ] [ AI for earth science ] [ ALFRED ] [ Algorithm ] [ algorithmic fairness ] [ Algorithmic fairness ] [ Algorithms ] [ alignment ] [ alignment of semantic and visual space ] [ amortized inference ] [ Analogies ] [ annotation artifacts ] [ anomaly-detection ] [ Anomaly detection with deep neural networks ] [ anonymous walk ] [ appearance transfer ] [ approximate constrained optimization ] [ approximation ] [ Approximation ] [ Architectures ] [ argoverse ] [ Artificial Integlligence ] [ ASR ] [ assistive technology ] [ associative memory ] [ Associative Memory ] [ asynchronous parallel algorithm ] [ Atari ] [ Attention ] [ Attention Mechanism ] [ Attention Modules ] [ attractors ] [ attributed walks ] [ Auction Theory ] [ audio understanding ] [ Audio-Visual ] [ audio visual learning ] [ audio-visual representation ] [ audio-visual representation learning ] [ Audio-visual sound separation ] [ audiovisual synthesis ] [ augmented deep reinforcement learning ] [ autodiff ] [ Autoencoders ] [ automated data augmentation ] [ automated machine learning ] [ automatic differentiation ] [ AutoML ] [ autonomous learning ] [ autoregressive language model ] [ Autoregressive Models ] [ AutoRL ] [ auxiliary information ] [ auxiliary latent variable ] [ Auxiliary Learning ] [ auxiliary task ] [ Average-case Analysis ] [ aversarial examples ] [ avoid knowledge leaking ] [ backdoor attack ] [ Backdoor Attacks ] [ Backdoor Defense ] [ Backgrounds ] [ backprop ] [ back translation ] [ backward error analysis ] [ bagging ] [ batchnorm ] [ Batch Normalization ] [ batch reinforcement learning ] [ Batch Reinforcement Learning ] [ batch selection ] [ Bayesian ] [ Bayesian classification ] [ Bayesian inference ] [ Bayesian Inference ] [ Bayesian networks ] [ Bayesian Neural Networks ] [ behavior cloning ] [ belief-propagation ] [ Benchmark ] [ benchmarks ] [ benign overfitting ] [ bert ] [ BERT ] [ beta-VAE ] [ better generalization ] [ biased sampling ] [ biases ] [ Bias in Language Models ] [ bidirectional ] [ bilevel optimization ] [ Bilinear games ] [ Binary Embeddings ] [ Binary Neural Networks ] [ binaural audio ] [ binaural speech ] [ biologically plausible ] [ Biometrics ] [ bisimulation ] [ Bisimulation ] [ bisimulation metrics ] [ bit-flip ] [ bit-level sparsity ] [ blind denoising ] [ blind spots ] [ block mdp ] [ boosting ] [ bottleneck ] [ bptt ] [ branch and bound ] [ Brownian motion ] [ 
Budget-Aware Pruning ] [ Budget constraints ] [ Byzantine resilience ] [ Byzantine SGD ] [ CAD modeling ] [ calibration ] [ Calibration ] [ calibration measure ] [ cancer research ] [ Capsule Networks ] [ Catastrophic forgetting ] [ Catastrophic Forgetting ] [ Causal Inference ] [ Causality ] [ Causal network ] [ certificate ] [ certified defense ] [ Certified Robustness ] [ challenge sets ] [ change of measure ] [ change point detection ] [ channel suppressing ] [ Channel Tensorization ] [ Channel-Wise Approximated Activation ] [ Chaos ] [ chebyshev polynomial ] [ checkpointing ] [ Checkpointing ] [ chemistry ] [ CIFAR ] [ Classification ] [ class imbalance ] [ clean-label ] [ Clustering ] [ Clusters ] [ CNN ] [ CNNs ] [ Code Compilation ] [ Code Representations ] [ Code Structure ] [ code summarization ] [ Code Summarization ] [ Cognitively-inspired Learning ] [ cold posteriors ] [ collaborative learning ] [ Combinatorial optimization ] [ common object counting ] [ commonsense question answering ] [ Commonsense Reasoning ] [ Communication Compression ] [ co-modulation ] [ complete verifiers ] [ complex query answering ] [ Composition ] [ compositional generalization ] [ compositional learning ] [ compositional task ] [ Compressed videos ] [ Compressing Deep Networks ] [ Compression ] [ computation ] [ computational biology ] [ Computational Biology ] [ computational complexity ] [ Computational imaging ] [ Computational neuroscience ] [ Computational resources ] [ computer graphics ] [ Computer Vision ] [ concentration ] [ Concentration of Measure ] [ Concept-based Explanation ] [ concept drift ] [ Concept Learning ] [ conditional expectation ] [ Conditional GANs ] [ Conditional Generation ] [ Conditional generative adversarial networks ] [ conditional layer normalization ] [ Conditional Neural Processes ] [ Conditional Risk Minimization ] [ Conditional Sampling ] [ conditional text generation ] [ Conferrability ] [ confidentiality ] [ conformal inference ] [ conformal prediction ] [ conjugacy ] [ conservation law ] [ consistency ] [ consistency training ] [ Consistency Training ] [ constellation models ] [ constrained beam search ] [ Constrained optimization ] [ constrained RL ] [ constraints ] [ constraint satisfaction ] [ contact tracing ] [ Contextual Bandits ] [ Contextual embedding space ] [ Continual learning ] [ Continual Learning ] [ continuation method ] [ continuous and scalar conditions ] [ continuous case ] [ Continuous Control ] [ continuous convolution ] [ continuous games ] [ continuous normalizing flow ] [ continuous time ] [ Continuous-time System ] [ continuous treatment effect ] [ contrastive divergence ] [ Contrastive learning ] [ Contrastive Learning ] [ Contrastive Methods ] [ contrastive representation learning ] [ control barrier function ] [ controlled generation ] [ Controlled NLG ] [ Convergence ] [ Convergence Analysis ] [ convex duality ] [ Convex optimization ] [ ConvNets ] [ convolutional kernel methods ] [ Convolutional Layer ] [ convolutional models ] [ Convolutional Networks ] [ copositive programming ] [ corruptions ] [ COST ] [ Counterfactual inference ] [ counterfactuals ] [ Counterfactuals ] [ covariant neural networks ] [ covid-19 ] [ COVID-19 ] [ Cross-domain ] [ cross-domain few-shot learning ] [ cross-domain video generation ] [ cross-episode attention ] [ cross-fitting ] [ cross-lingual pretraining ] [ Cryptographic inference ] [ cultural transmission ] [ Curriculum Learning ] [ curse of memory ] [ curvature estimates ] [ custom voice ] [ 
cycle-consistency regularization ] [ cycle-consistency regularizer ] [ DAG ] [ DARTS stability ] [ Data augmentation ] [ Data Augmentation ] [ data cleansing ] [ Data-driven modeling ] [ data-efficient learning ] [ data-efficient RL ] [ Data Flow ] [ data labeling ] [ data parallelism ] [ Data Poisoning ] [ Data Protection ] [ Dataset ] [ dataset bias ] [ dataset compression ] [ dataset condensation ] [ dataset corruption ] [ dataset distillation ] [ dataset summarization ] [ data structures ] [ debiased training ] [ debugging ] [ Decentralized Optimization ] [ decision boundary geometry ] [ decision trees ] [ declarative knowledge ] [ deep-anomaly-detection ] [ Deep Architectures ] [ Deep denoising priors ] [ deep embedding ] [ Deep Ensembles ] [ deep equilibrium models ] [ Deep Equilibrium Models ] [ Deepfake ] [ deep FBSDEs ] [ Deep Gaussian Processes ] [ Deep generative model ] [ Deep generative modeling ] [ Deep generative models ] [ deeplearning ] [ Deep learning ] [ Deep Learning ] [ deep learning dynamics ] [ Deep Learning Theory ] [ deep network training ] [ deep neural network ] [ deep neural networks. ] [ Deep Neural Networks ] [ deep one-class classification ] [ deep Q-learning ] [ Deep reinforcement learning ] [ Deep Reinforcement Learning ] [ deep ReLU networks ] [ Deep residual neural networks ] [ deep RL ] [ deep sequence model ] [ deepset ] [ Deep Sets ] [ Deformation Modeling ] [ delay ] [ Delay differential equations ] [ denoising score matching ] [ Dense Retrieval ] [ Density estimation ] [ Density Estimation ] [ Density ratio estimation ] [ dependency based method ] [ deployment-efficiency ] [ depression ] [ depth separation ] [ descent ] [ description length ] [ determinantal point processes ] [ Device Placement ] [ dialogue state tracking ] [ differentiable optimization ] [ Differentiable physics ] [ Differentiable Physics ] [ Differentiable program generator ] [ differentiable programming ] [ Differentiable rendering ] [ Differentiable simulation ] [ differential dynamica programming ] [ differential equations ] [ Differential Geometry ] [ differentially private deep learning ] [ Differential Privacy ] [ diffusion probabilistic models ] [ diffusion process ] [ dimension ] [ Directed Acyclic Graphs ] [ Dirichlet form ] [ Discrete Optimization ] [ discretization error ] [ disentangled representation learning ] [ Disentangled representation learning ] [ Disentanglement ] [ distance ] [ Distillation ] [ distinct elements ] [ Distributed ] [ distributed deep learning ] [ distributed inference ] [ Distributed learning ] [ distributed machine learning ] [ Distributed ML ] [ Distributed Optimization ] [ distributional robust optimization ] [ distribution estimation ] [ distribution shift ] [ diverse strategies ] [ diverse video generation ] [ Diversity denoising ] [ Diversity Regularization ] [ DNN ] [ DNN compression ] [ document analysis ] [ document classification ] [ document retrieval ] [ domain adaptation theory ] [ Domain Adaption ] [ Domain Generalization ] [ domain randomization ] [ Domain Translation ] [ double descent ] [ Double Descent ] [ doubly robustness ] [ Doubly-weighted Laplace operator ] [ Dropout ] [ drug discovery ] [ Drug discovery ] [ dst ] [ Dual-mode ASR ] [ Dueling structure ] [ Dynamical Systems ] [ dynamic computation graphs ] [ dynamics ] [ dynamics prediction ] [ dynamic systems ] [ Early classification ] [ Early pruning ] [ early stopping ] [ EBM ] [ Edit ] [ EEG ] [ effective learning rate ] [ Efficiency ] [ Efficient Attention Mechanism ] [ 
efficient deep learning ] [ Efficient Deep Learning ] [ Efficient Deep Learning Inference ] [ Efficient ensembles ] [ efficient inference ] [ efficient inference methods ] [ Efficient Inference Methods ] [ EfficientNets ] [ efficient network ] [ Efficient Networks ] [ Efficient training ] [ Efficient Training ] [ efficient training and inference. ] [ egocentric ] [ eigendecomposition ] [ Eigenspectrum ] [ ELBO ] [ electroencephalography ] [ EM ] [ Embedding Models ] [ Embedding Size ] [ Embodied Agents ] [ embodied vision ] [ emergent behavior ] [ empirical analysis ] [ Empirical Game Theory ] [ empirical investigation ] [ Empirical Investigation ] [ empirical study ] [ empowerment ] [ Encoder layer fusion ] [ end-to-end entity linking ] [ End-to-End Object Detection ] [ Energy ] [ Energy-Based GANs ] [ energy based model ] [ energy-based model ] [ Energy-based model ] [ energy based models ] [ Energy-based Models ] [ Energy Based Models ] [ Energy-Based Models ] [ Energy Score ] [ ensemble ] [ Ensemble ] [ ensemble learning ] [ ensembles ] [ Ensembles ] [ entity disambiguation ] [ entity linking ] [ entity retrieval ] [ entropic algorithms ] [ Entropy Maximization ] [ Entropy Model ] [ entropy regularization ] [ epidemiology ] [ episode-level pretext task ] [ episodic training ] [ equilibrium ] [ equivariant ] [ equivariant neural network ] [ ERP ] [ Evaluation ] [ evaluation of interpretability ] [ Event localization ] [ evolution ] [ Evolutionary algorithm ] [ Evolutionary Algorithm ] [ Evolutionary Algorithms ] [ Excess risk ] [ experience replay buffer ] [ experimental evaluation ] [ Expert Models ] [ Explainability ] [ explainable ] [ Explainable AI ] [ Explainable Model ] [ explaining decision-making ] [ explanation method ] [ explanations ] [ Explanations ] [ Exploration ] [ Exponential Families ] [ exponential tilting ] [ exposition ] [ external memory ] [ Extrapolation ] [ extremal sector ] [ facial recognition ] [ factor analysis ] [ factored MDP ] [ Factored MDP ] [ fairness ] [ Fairness ] [ faithfulness ] [ fast DNN inference ] [ fast learning rate ] [ fast-mapping ] [ fast weights ] [ FAVOR ] [ Feature Attribution ] [ feature propagation ] [ features ] [ feature visualization ] [ Feature Visualization ] [ Federated learning ] [ Federated Learning ] [ Few Shot ] [ few-shot concept learning ] [ few-shot domain generalization ] [ Few-shot learning ] [ Few Shot Learning ] [ fine-tuning ] [ finetuning ] [ Fine-tuning ] [ Finetuning ] [ fine-tuning stability ] [ Fingerprinting ] [ First-order Methods ] [ first-order optimization ] [ fisher ratio ] [ flat minima ] [ Flexibility ] [ flow graphs ] [ Fluid Dynamics ] [ Follow-the-Regularized-Leader ] [ Formal Verification ] [ forward mode ] [ Fourier Features ] [ Fourier transform ] [ framework ] [ Frobenius norm ] [ from-scratch ] [ frontend ] [ fruit fly ] [ fully-connected ] [ Fully-Connected Networks ] [ future frame generation ] [ future link prediction ] [ fuzzy tiling activation function ] [ Game Decomposition ] [ Game Theory ] [ GAN ] [ GAN compression ] [ GANs ] [ Garbled Circuits ] [ Gaussian Copula ] [ Gaussian Graphical Model ] [ Gaussian Isoperimetric Inequality ] [ Gaussian mixture model ] [ Gaussian process ] [ Gaussian Process ] [ Gaussian Processes ] [ gaussian process priors ] [ GBDT ] [ generalisation ] [ Generalization ] [ Generalization Bounds ] [ generalization error ] [ Generalization Measure ] [ Generalization of Reinforcement Learning ] [ generalized ] [ generalized Girsanov theorem ] [ Generalized PageRank ] [ 
Generalized zero-shot learning ] [ Generation ] [ Generative Adversarial Network ] [ Generative Adversarial Networks ] [ generative art ] [ Generative Flow ] [ Generative Model ] [ Generative modeling ] [ Generative Modeling ] [ generative modelling ] [ Generative Modelling ] [ Generative models ] [ Generative Models ] [ genetic programming ] [ Geodesic-Aware FC Layer ] [ geometric ] [ Geometric Deep Learning ] [ G-invariance regularization ] [ global ] [ global optima ] [ Global Reference ] [ glue ] [ GNN ] [ GNNs ] [ goal-conditioned reinforcement learning ] [ goal-conditioned RL ] [ goal reaching ] [ gradient ] [ gradient alignment ] [ Gradient Alignment ] [ gradient boosted decision trees ] [ gradient boosting ] [ gradient decomposition ] [ Gradient Descent ] [ gradient descent-ascent ] [ gradient flow ] [ Gradient flow ] [ gradient flows ] [ gradient redundancy ] [ Gradient stability ] [ Grammatical error correction ] [ Granger causality ] [ Graph ] [ graph classification ] [ graph coarsening ] [ Graph Convolutional Network ] [ Graph Convolutional Neural Networks ] [ graph edit distance ] [ Graph Generation ] [ Graph Generative Model ] [ graph-level prediction ] [ graph networks ] [ Graph neural network ] [ Graph Neural Network ] [ Graph neural networks ] [ Graph Neural Networks ] [ Graph pooling ] [ graph representation learning ] [ Graph representation learning ] [ Graph Representation Learning ] [ graph shift operators ] [ graph-structured data ] [ graph structure learning ] [ Greedy Learning ] [ grid cells ] [ grounding ] [ group disparities ] [ group equivariance ] [ Group Equivariance ] [ Group Equivariant Convolution ] [ group equivariant self-attention ] [ group equivariant transformers ] [ group sparsity ] [ Group-supervised learning ] [ gumbel-softmax ] [ Hamiltonian systems ] [ hard-label attack ] [ hard negative mining ] [ hard negative sampling ] [ Hardware-Aware Neural Architecture Search ] [ Harmonic Analysis ] [ harmonic distortion analysis ] [ healthcare ] [ Healthcare ] [ heap allocation ] [ Hessian matrix ] [ Heterogeneity ] [ Heterogeneous ] [ heterogeneous data ] [ Heterogeneous data ] [ Heterophily ] [ heteroscedasticity ] [ heuristic search ] [ hidden-parameter mdp ] [ hierarchical contrastive learning ] [ Hierarchical Imitation Learning ] [ Hierarchical Multi-Agent Learning ] [ Hierarchical Networks ] [ Hierarchical Reinforcement Learning ] [ Hierarchy-Aware Classification ] [ high-dimensional asymptotics ] [ high-dimensional statistic ] [ high-resolution video generation ] [ hindsight relabeling ] [ histogram binning ] [ historical color image classification ] [ HMC ] [ homomorphic encryption ] [ Homophily ] [ Hopfield layer ] [ Hopfield networks ] [ Hopfield Networks ] [ human-AI collaboration ] [ human cognition ] [ human-computer interaction ] [ human preferences ] [ human psychophysics ] [ humans in the loop ] [ hybrid systems ] [ Hyperbolic ] [ hyperbolic deep learning ] [ Hyperbolic Geometry ] [ hypercomplex representation learning ] [ hypergradients ] [ Hypernetworks ] [ hyperparameter ] [ Hyperparameter Optimization ] [ Hyper-Parameter Optimization ] [ HYPERPARAMETER OPTIMIZATION ] [ Image Classification ] [ image completion ] [ Image compression ] [ Image Editing ] [ Image Generation ] [ Image manipulation ] [ Image Modeling ] [ ImageNet ] [ image reconstruction ] [ Image segmentation ] [ Image Synthesis ] [ image-to-action learning ] [ Image-to-Image Translation ] [ image translation ] [ image warping ] [ imbalanced learning ] [ Imitation Learning ] 
[ Impartial Learning ] [ implicit bias ] [ Implicit Bias ] [ Implicit Deep Learning ] [ implicit differentiation ] [ implicit functions ] [ implicit neural representations ] [ Implicit Neural Representations ] [ Implicit Representation ] [ Importance Weighting ] [ impossibility ] [ incoherence ] [ Incompatible Environments ] [ Incremental Tree Transformations ] [ independent component analysis ] [ indirection ] [ Individual mediation effects ] [ Inductive Bias ] [ inductive biases ] [ inductive representation learning ] [ infinitely wide neural network ] [ Infinite-Width Limit ] [ infinite-width networks ] [ influence functions ] [ Influence Functions ] [ Information bottleneck ] [ Information Bottleneck ] [ Information Geometry ] [ information-theoretical probing ] [ Information theory ] [ Information Theory ] [ Initialization ] [ input-adaptive multi-exit neural networks ] [ input convex neural networks ] [ input-convex neural networks ] [ InstaHide ] [ Instance adaptation ] [ instance-based label noise ] [ Instance learning ] [ Instance-wise Learning ] [ Instrumental Variable Regression ] [ integral probability metric ] [ intention ] [ interaction networks ] [ Interactions ] [ interactive fiction ] [ Internet of Things ] [ Interpolation Peak ] [ Interpretability ] [ interpretable latent representation ] [ Interpretable Machine Learning ] [ interpretable policy learning ] [ in-the-wild data ] [ Intrinsically Motivated Reinforcement Learning ] [ Intrinsic Motivation ] [ intrinsic motivations ] [ Intrinsic Reward ] [ Invariance and Equivariance ] [ invariance penalty ] [ invariances ] [ Invariant and equivariant deep networks ] [ Invariant Representations ] [ invariant risk minimization ] [ Invariant subspaces ] [ inverse graphics ] [ Inverse reinforcement learning ] [ Inverse Reinforcement Learning ] [ Inverted Index ] [ irl ] [ IRM ] [ irregularly spaced time series ] [ irregular-observed data modelling ] [ isometric ] [ Isotropy ] [ iterated learning ] [ iterative training ] [ JEM ] [ Johnson-Lindenstrauss Transforms ] [ kernel ] [ Kernel Learning ] [ kernel method ] [ kernel-ridge regression ] [ kernels ] [ keypoint localization ] [ Knowledge distillation ] [ Knowledge Distillation ] [ Knowledge factorization ] [ Knowledge Graph Reasoning ] [ knowledge uncertainty ] [ Kullback-Leibler divergence ] [ Kurdyka-Łojasiewicz geometry ] [ label noise robustness ] [ Label Representation ] [ Label shift ] [ label smoothing ] [ Langevin dynamics ] [ Langevin sampling ] [ Language Grounding ] [ Language Model ] [ Language modeling ] [ Language Modeling ] [ Language Modelling ] [ Language Model Pre-training ] [ language processing ] [ language-specific modeling ] [ Laplace kernel ] [ Large-scale ] [ Large-scale Deep Learning ] [ large scale learning ] [ Large-scale Machine Learning ] [ large-scale pre-trained language models ] [ large-scale training ] [ large vocabularies ] [ Last-iterate Convergence ] [ Latency-aware Neural Architecture Search ] [ Latent Simplex ] [ latent space of GANs ] [ Latent Variable Models ] [ lattices ] [ Layer order ] [ layerwise sparsity ] [ learnable ] [ learned algorithms ] [ Learned compression ] [ learned ISTA ] [ Learning ] [ learning action representations ] [ learning-based ] [ learning dynamics ] [ Learning Dynamics ] [ Learning in Games ] [ learning mechanisms ] [ Learning physical laws ] [ Learning Theory ] [ Learning to Hash ] [ learning to optimize ] [ Learning to Optimize ] [ learning to rank ] [ Learning to Rank ] [ learning to teach ] [ learning with 
noisy labels ] [ Learning with noisy labels ] [ library ] [ lifelong ] [ Lifelong learning ] [ Lifelong Learning ] [ lifted inference ] [ likelihood-based models ] [ likelihood-free inference ] [ limitations ] [ limited data ] [ linear bandits ] [ Linear Convergence ] [ linear estimator ] [ Linear Regression ] [ linear terms ] [ linformer ] [ Lipschitz constants ] [ Lipschitz constrained networks ] [ Local Explanations ] [ locality sensitive hashing ] [ Locally supervised training ] [ local Rademacher complexity ] [ log-concavity ] [ Logic ] [ Logic Rules ] [ logsignature ] [ Long-Tailed Recognition ] [ long-tail learning ] [ Long-term dependencies ] [ long-term prediction ] [ long-term stability ] [ loss correction ] [ Loss function search ] [ Loss Function Search ] [ lossless source compression ] [ Lottery Ticket ] [ Lottery Ticket Hypothesis ] [ lottery tickets ] [ low-dimensional structure ] [ lower bound ] [ lower bounds ] [ Low-latency ASR ] [ low precision training ] [ low rank ] [ low-rank approximation ] [ low-rank tensors ] [ L-smoothness ] [ LSTM ] [ Lyapunov Chaos ] [ Machine learning ] [ Machine Learning ] [ machine learning for code ] [ Machine Learning for Robotics ] [ Machine Learning (ML) for Programming Languages (PL)/Software Engineering (SE) ] [ machine learning systems ] [ Machine translation ] [ Machine Translation ] [ magnitude-based pruning ] [ Manifold clustering ] [ Manifolds ] [ Many-task ] [ mapping ] [ Markov chain Monte Carlo ] [ Markov Chain Monte Carlo ] [ Markov jump process ] [ Masked Reconstruction ] [ mathematical reasoning ] [ Matrix and Tensor Factorization ] [ matrix completion ] [ matrix decomposition ] [ Matrix Factorization ] [ max-margin ] [ MCMC ] [ MCMC sampling ] [ mean estimation ] [ mean-field dynamics ] [ mean separation ] [ Mechanism Design ] [ medical time series ] [ mel-filterbanks ] [ memorization ] [ Memorization ] [ Memory ] [ memory efficient ] [ memory efficient training ] [ Memory Mapping ] [ memory optimized training ] [ Memory-saving ] [ mesh ] [ Message Passing ] [ Message Passing GNNs ] [ meta-gradients ] [ Meta-learning ] [ Meta Learning ] [ Meta-Learning ] [ Metric Surrogate ] [ minimax optimal rate ] [ Minimax Optimization ] [ minimax risk ] [ Minmax ] [ min-max optimization ] [ mirror-prox ] [ Missing Data Inference ] [ Missing value imputation ] [ Missing Values ] [ misssing data ] [ mixed precision ] [ Mixed Precision ] [ Mixed-precision quantization ] [ mixture density nets ] [ mixture of experts ] [ mixup ] [ Mixup ] [ MixUp ] [ MLaaS ] [ MoCo ] [ Model Attribution ] [ model-based control ] [ model-based learning ] [ Model-based Reinforcement Learning ] [ Model-Based Reinforcement Learning ] [ model-based RL ] [ Model-based RL ] [ Model Biases ] [ Model compression ] [ model extraction ] [ model fairness ] [ Model Inversion ] [ model order reduction ] [ model ownership ] [ model predictive control ] [ model-predictive control ] [ Model Predictive Control ] [ Model privacy ] [ Models for code ] [ models of learning and generalization ] [ Model stealing ] [ Modern Hopfield Network ] [ modern Hopfield networks ] [ modified equation analysis ] [ modular architectures ] [ Modular network ] [ modular networks ] [ modular neural networks ] [ modular representations ] [ modulated convolution ] [ Molecular conformation generation ] [ molecular design ] [ Molecular Dynamics ] [ molecular graph generation ] [ Molecular Representation ] [ Molecule Design ] [ Momentum ] [ momentum methods ] [ momentum optimizer ] [ monotonicity ] [ 
Monte Carlo ] [ Monte-Carlo tree search ] [ Monte Carlo Tree Search ] [ morphology ] [ Morse theory ] [ mpc ] [ Multi-agent ] [ Multi-agent games ] [ Multiagent Learning ] [ multi-agent platform ] [ Multi-Agent Policy Gradients ] [ Multi-agent reinforcement learning ] [ Multi-agent Reinforcement Learning ] [ Multi-Agent Reinforcement Learning ] [ Multi-Agent Transfer Learning ] [ multiclass classification ] [ multi-dimensional discrete action spaces ] [ Multi-domain ] [ multi-domain disentanglement ] [ multi-head attention ] [ Multi-Hop ] [ multi-hop question answering ] [ Multi-hop Reasoning ] [ Multilingual Modeling ] [ multilingual representations ] [ multilingual transformer ] [ multilingual translation ] [ Multimodal ] [ Multi-Modal ] [ Multimodal Attention ] [ multi-modal learning ] [ Multimodal Learning ] [ Multi-Modal Learning ] [ Multimodal Spaces ] [ Multi-objective optimization ] [ multi-player ] [ Multiplicative Weights Update ] [ Multi-scale Representation ] [ multitask ] [ Multi-task ] [ Multi-task Learning ] [ Multi Task Learning ] [ Multi-Task Learning ] [ multi-task learning theory ] [ Multitask Reinforcement Learning ] [ Multi-view Learning ] [ Multi-View Learning ] [ Multi-view Representation Learning ] [ Mutual Information ] [ MuZero ] [ Named Entity Recognition ] [ NAS ] [ nash ] [ natural gradient descent ] [ Natural Language Processing ] [ natural scene statistics ] [ natural sparsity ] [ Negative Sampling ] [ negotiation ] [ nested optimization ] [ network architecture ] [ Network Architecture ] [ Network Inductive Bias ] [ network motif ] [ Network pruning ] [ Network Pruning ] [ networks ] [ network trainability ] [ network width ] [ Neural Architecture Search ] [ Neural Attention Distillation ] [ neural collapse ] [ Neural data compression ] [ Neural IR ] [ neural kernels ] [ neural link prediction ] [ Neural Model Explanation ] [ neural module network ] [ Neural Network ] [ Neural Network Bounding ] [ neural network calibration ] [ Neural Network Gaussian Process ] [ neural network robustness ] [ Neural networks ] [ Neural Networks ] [ neural network training ] [ Neural Network Verification ] [ neural ode ] [ Neural ODE ] [ Neural ODEs ] [ Neural operators ] [ Neural Physics Engines ] [ Neural Processes ] [ neural reconstruction ] [ neural sound synthesis ] [ neural spike train ] [ neural symbolic reasoning ] [ neural tangent kernel ] [ Neural tangent kernel ] [ Neural Tangent Kernel ] [ neural tangent kernels ] [ Neural text decoding ] [ neurobiology ] [ Neuroevolution ] [ Neuro symbolic ] [ Neuro-Symbolic Learning ] [ neuro-symbolic models ] [ NLI ] [ NLP ] [ Node Embeddings ] [ noise contrastive estimation ] [ Noise-contrastive learning ] [ Noise model ] [ noise robust learning ] [ Noisy Demonstrations ] [ noisy label ] [ Noisy Label ] [ Noisy Labels ] [ Non-asymptotic Confidence Intervals ] [ non-autoregressive generation ] [ nonconvex ] [ non-convex learning ] [ Non-Convex Optimization ] [ Non-IID ] [ nonlinear control theory ] [ nonlinear dynamical systems ] [ nonlinear Hawkes process ] [ nonlinear walk ] [ Non-Local Modules ] [ non-minimax optimization ] [ nonnegative PCA ] [ nonseparable Hailtonian system ] [ non-smooth models ] [ non-stationary stochastic processes ] [ no-regret learning ] [ normalized maximum likelihood ] [ normalize layer ] [ normalizers ] [ Normalizing Flow ] [ normalizing flows ] [ Normalizing flows ] [ Normalizing Flows ] [ normative models ] [ novelty-detection ] [ ntk ] [ number of linear regions ] [ numerical errors ] [ 
numerical linear algebra ] [ object-centric representations ] [ Object detection ] [ Object Detection ] [ object-keypoint representations ] [ ObjectNet ] [ Object Permanence ] [ Observational Imitation ] [ ODE ] [ offline ] [ offline/batch reinforcement learning ] [ off-line reinforcement learning ] [ offline reinforcement learning ] [ Offline Reinforcement Learning ] [ offline RL ] [ off-policy evaluation ] [ Off Policy Evaluation ] [ Off-policy policy evaluation ] [ Off-Policy Reinforcement Learning ] [ off-policy RL ] [ one-class-classification ] [ one-to-many mapping ] [ Open-domain ] [ open domain complex question answering ] [ open source ] [ Optimal Control Theory ] [ optimal convergence ] [ optimal power flow ] [ Optimal Transport ] [ optimal transport maps ] [ Optimisation for Deep Learning ] [ optimism ] [ Optimistic Gradient Descent Ascent ] [ Optimistic Mirror Decent ] [ Optimistic Multiplicative Weights Update ] [ Optimization ] [ order learning ] [ ordinary differential equation ] [ orthogonal ] [ orthogonal layers ] [ orthogonal machine learning ] [ Orthogonal Polynomials ] [ Oscillators ] [ outlier detection ] [ outlier-detection ] [ Outlier detection ] [ out-of-distribution ] [ Out-of-distribution detection in deep learning ] [ out-of-distribution generalization ] [ Out-of-domain ] [ over-fitting ] [ Overfitting ] [ overparameterisation ] [ over-parameterization ] [ Over-parameterization ] [ Overparameterization ] [ overparameterized neural networks ] [ Over-smoothing ] [ Oversmoothing ] [ over-squashing ] [ PAC Bayes ] [ padding ] [ parallel Monte Carlo Tree Search (MCTS) ] [ parallel tempering ] [ Parameter-Reduced MLR ] [ part-based ] [ Partial Amortization ] [ Partial differential equation ] [ partial differential equations ] [ partially observed environments ] [ particle inference ] [ pca ] [ pde ] [ pdes ] [ PDEs ] [ performer ] [ persistence diagrams ] [ personalized learning ] [ perturbation sets ] [ Peter-Weyl Theorem ] [ phase retrieval ] [ Physical parameter estimation ] [ physical reasoning ] [ physical scene understanding ] [ Physical Simulation ] [ physical symbol grounding ] [ physics ] [ physics-guided deep learning ] [ piecewise linear function ] [ pipeline toolkit ] [ plan-based reward shaping ] [ Planning ] [ Poincaré Ball Model ] [ Point cloud ] [ Point clouds ] [ point processes ] [ pointwise mutual information ] [ poisoning ] [ poisoning attack ] [ poisson matrix factorization ] [ policy learning ] [ Policy Optimization ] [ polynomial time ] [ Pose Estimation ] [ Position Embedding ] [ Position Encoding ] [ post-hoc calibration ] [ Post-Hoc Correction ] [ Post Training Quantization ] [ power grid management ] [ Predictive Modeling ] [ predictive uncertainty ] [ Predictive Uncertainty Estimation ] [ pretrained language model ] [ pretrained language model. 
] [ pre-trained language model fine-tuning ] [ Pretrained Language Models ] [ Pretrained Text Encoders ] [ pre-training ] [ Pre-training ] [ Primitive Discovery ] [ principal components analysis ] [ Privacy ] [ privacy leakage from gradients ] [ privacy preserving machine learning ] [ Privacy-utility tradeoff ] [ probabelistic models ] [ probabilistic generative models ] [ probabilistic inference ] [ probabilistic matrix factorization ] [ Probabilistic Methods ] [ probabilistic multivariate forecasting ] [ probabilistic numerics ] [ probabilistic programs ] [ probably approximated correct guarantee ] [ Probe ] [ probing ] [ procedural generation ] [ procedural knowledge ] [ product of experts ] [ Product Quantization ] [ Program obfuscation ] [ Program Synthesis ] [ Proper Scoring Rules ] [ protein ] [ prototype propagation ] [ Provable Robustness ] [ provable sample efficiency ] [ proximal gradient descent-ascent ] [ proxy ] [ Pruning ] [ Pruning at initialization ] [ pseudo-labeling ] [ Pseudo-Labeling ] [ QA ] [ Q-learning ] [ Quantization ] [ quantum machine learning ] [ quantum mechanics ] [ Quantum Mechanics ] [ Question Answering ] [ random ] [ Random Feature ] [ Random Features ] [ Randomized Algorithms ] [ Random Matrix Theory ] [ Random Weights Neural Networks ] [ rank-collapse ] [ rank-constrained convex optimization ] [ rao ] [ rao-blackwell ] [ Rate-distortion optimization ] [ raven's progressive matrices ] [ real time recurrent learning ] [ real-world ] [ Real-world image denoising ] [ reasoning paths ] [ recommendation systems ] [ recommender system ] [ Recommender Systems ] [ recovery likelihood ] [ rectified linear unit ] [ Recurrent Generative Model ] [ Recurrent Neural Network ] [ Recurrent neural networks ] [ Recurrent Neural Networks ] [ recursive dense retrieval ] [ reformer ] [ regime agnostic methods ] [ Regression ] [ Regression without correspondence ] [ regret analysis ] [ regret minimization ] [ Regularization ] [ Regularization by denoising ] [ regularized markov decision processes ] [ Reinforcement ] [ Reinforcement learning ] [ Reinforcement Learning ] [ Reinforcement Learnings ] [ Reinforcement learning theory ] [ relabelling ] [ Relational regularized autoencoder ] [ Relation Extraction ] [ relaxed regularization ] [ relu network ] [ ReLU networks ] [ Rematerialization ] [ Render-and-Compare ] [ Reparameterization ] [ repetitions ] [ replica exchange ] [ representational learning ] [ representation analysis ] [ Representation learning ] [ Representation Learning ] [ representation learning for computer vision ] [ representation learning for robotics ] [ representation of dynamical systems ] [ Representation Theory ] [ reproducibility ] [ reproducible research ] [ Reproducing kernel Hilbert space ] [ resampling ] [ reset-free ] [ residual ] [ ResNets ] [ resource constrained ] [ Restricted Boltzmann Machines ] [ retraining ] [ Retrieval ] [ reverse accuracy ] [ reverse engineering ] [ reward learning ] [ reward randomization ] [ reward shaping ] [ reweighting ] [ Rich observation ] [ rich observations ] [ risk-averse ] [ Risk bound ] [ Risk Estimation ] [ risk sensitive ] [ rl ] [ RMSprop ] [ RNA-protein interaction prediction ] [ RNA structure ] [ RNA structure embedding ] [ RNN ] [ RNNs ] [ robotic manipulation ] [ robust ] [ robust control ] [ robust deep learning ] [ Robust Deep Learning ] [ robust learning ] [ Robust Learning ] [ Robust Machine Learning ] [ Robustness ] [ Robustness certificates ] [ Robust Overfitting ] [ ROC ] [ Role-Based Learning ] [ 
rooted graphs ] [ Rotation invariance ] [ rtrl ] [ Runtime Systems ] [ Saddle-point Optimization ] [ safe ] [ Safe exploration ] [ safe planning ] [ Saliency ] [ Saliency Guided Data Augmentation ] [ saliency maps ] [ SaliencyMix ] [ sample complexity separation ] [ Sample Efficiency ] [ sample information ] [ sample reweighting ] [ Sampling ] [ sampling algorithms ] [ Scalability ] [ Scale ] [ scale-invariant weights ] [ Scale of initialization ] [ scene decomposition ] [ scene generation ] [ Scene Understanding ] [ Science ] [ science of deep learning ] [ score-based generative models ] [ score matching ] [ score-matching ] [ SDE ] [ Second-order analysis ] [ second-order approximation ] [ second-order optimization ] [ Security ] [ segmented models ] [ selective classification ] [ Self-Imitation ] [ self supervised learning ] [ Self-supervised learning ] [ Self-supervised Learning ] [ Self Supervised Learning ] [ Self-Supervised Learning ] [ self-supervision ] [ self-training ] [ self-training theory ] [ semantic anomaly detection ] [ semantic directions in latent space ] [ semantic graphs ] [ Semantic Image Synthesis ] [ semantic parsing ] [ semantic role labeling ] [ semantic-segmentation ] [ Semantic Segmentation ] [ Semantic Textual Similarity ] [ semi-infinite duality ] [ semi-nonnegative matrix factorization ] [ semiparametric inference ] [ semi-supervised ] [ Semi-supervised Learning ] [ Semi-Supervised Learning ] [ semi-supervised learning theory ] [ Sentence Embeddings ] [ Sentence Representations ] [ Sentiment ] [ separation of variables ] [ Sequence Data ] [ Sequence Modeling ] [ sequence models ] [ Sequence-to-sequence learning ] [ sequence-to-sequence models ] [ sequential data ] [ Sequential probability ratio test ] [ Sequential Representation Learning ] [ set prediction ] [ set transformer ] [ SGD ] [ SGD noise ] [ sgld ] [ Shape ] [ shape bias ] [ Shape Bias ] [ Shape Encoding ] [ shapes ] [ Shapley values ] [ Sharpness Minimization ] [ side channel analysis ] [ Sigma Delta Quantization ] [ sign agnostic learning ] [ signal propagation ] [ signature ] [ sim2real ] [ sim2real transfer ] [ simple ] [ Singularity analysis ] [ singular value decomposition ] [ Sinkhorn algorithm ] [ skeleton-based action recognition ] [ sketch-based modeling ] [ sketches ] [ Skill Discovery ] [ SLAM ] [ sliced fused Gromov Wasserstein ] [ Sliced Wasserstein ] [ Slowdown attacks ] [ slowness ] [ Smooth games ] [ smoothing ] [ SMT Solvers ] [ social perception ] [ Soft Body ] [ soft labels ] [ software ] [ sound classification ] [ sound spatialization ] [ Source Code ] [ sparse Bayesian learning ] [ Sparse Embedding ] [ sparse embeddings ] [ sparse reconstruction ] [ sparse representation ] [ sparse representations ] [ sparse stochastic gates ] [ Sparsity ] [ Sparsity Learning ] [ spatial awareness ] [ spatial bias ] [ spatial uncertainty ] [ spatio-temporal forecasting ] [ spatio-temporal graph ] [ spatio-temporal modeling ] [ spatio-temporal modelling ] [ spatiotemporal prediction ] [ Spatiotemporal Understanding ] [ Spectral Analysis ] [ Spectral Distribution ] [ Spectral Graph Filter ] [ spectral regularization ] [ speech generation ] [ speech-impaired ] [ speech processing ] [ speech recognition. 
] [ Speech Recognition ] [ spherical distributions ] [ spiking neural network ] [ spurious correlations ] [ square loss vs cross-entropy ] [ stability theory ] [ State abstraction ] [ state abstractions ] [ state-space models ] [ statistical learning theory ] [ Statistical Learning Theory ] [ statistical physics ] [ Statistical Physics ] [ statistical physics methods ] [ Steerable Kernel ] [ Stepsize optimization ] [ stochastic asymptotics ] [ stochastic control ] [ (stochastic) gradient descent ] [ Stochastic Gradient Descent ] [ stochastic gradient Langevin dynamics ] [ stochastic process ] [ Stochastic Processes ] [ stochastic subgradient method ] [ Storage Capacity ] [ straight-through ] [ straightthrough ] [ strategic behavior ] [ Streaming ASR ] [ structural biology ] [ structural credit assignment ] [ structural inductive bias ] [ Structured Pruning ] [ Structure learning ] [ structure prediction ] [ structures prediction ] [ Style Mixing ] [ Style Transfer ] [ subgraph reasoning. ] [ sublinear ] [ submodular optimization ] [ Subspace clustering ] [ Summarization ] [ summary statistics ] [ superpixel ] [ supervised contrastive learning ] [ Supervised Deep Networks ] [ Supervised Learning ] [ support estimation ] [ surprisal ] [ surrogate models ] [ svd ] [ SVD ] [ Symbolic Methods ] [ symbolic regression ] [ symbolic representations ] [ Symmetry ] [ symplectic networks ] [ Syntax ] [ Synthetic benchmark dataset ] [ synthetic-to-real generalization ] [ Systematic generalisation ] [ Systematicity ] [ System identification ] [ Tabular ] [ tabular data ] [ Tabular Data ] [ targeted attack ] [ Task Embeddings ] [ task generation ] [ task-oriented dialogue ] [ Task-oriented Dialogue System ] [ task reduction ] [ Task Segmentation ] [ Teacher-Student Learning ] [ teacher-student model ] [ temporal context ] [ Temporal knowledge graph ] [ temporal networks ] [ tensor product ] [ Text-based Games ] [ Text Representation ] [ Text Retrieval ] [ Text to speech ] [ Text to speech synthesis ] [ text-to-sql ] [ Texture ] [ Texture Bias ] [ Textworld ] [ Theorem proving ] [ theoretical issues in deep learning ] [ theoretical limits ] [ theoretical study ] [ Theory ] [ Theory of deep learning ] [ theory of mind ] [ Third-Person Imitation ] [ Thompson sampling ] [ time-frequency representations ] [ timescale ] [ timescales ] [ Time Series ] [ Time series forecasting ] [ time series prediction ] [ topic modelling ] [ Topology ] [ training dynamics ] [ Training Method ] [ trajectory ] [ trajectory optimization ] [ trajectory prediction ] [ Transferability ] [ Transfer learning ] [ Transfer Learning ] [ transformation invariance ] [ Transformer ] [ Transformers ] [ traveling salesperson problem ] [ Tree-structured Data ] [ trembl ] [ tropical function ] [ trust region ] [ two-layer neural network ] [ Uncertainty ] [ uncertainty calibration ] [ Uncertainty estimates ] [ Uncertainty estimation ] [ Uncertainty Machine Learning ] [ understanding ] [ understanding CNNs ] [ Understanding Data Augmentation ] [ understanding decision-making ] [ understanding deep learning ] [ Understanding Deep Learning ] [ understanding neural networks ] [ U-Net ] [ unidirectional ] [ uniprot ] [ universal approximation ] [ Universal approximation ] [ Universality ] [ universal representation learning ] [ universal sound separation ] [ unlabeled data ] [ Unlabeled Entity Problem ] [ Unlearnable Examples ] [ unrolled algorithms ] [ Unsupervised denoising ] [ Unsupervised Domain Translation ] [ unsupervised image denoising ] [ 
Unsupervised learning ] [ Unsupervised Learning ] [ unsupervised learning theory ] [ unsupervised loss ] [ Unsupervised Meta-learning ] [ unsupervised object discovery ] [ Unsupervised reinforcement learning ] [ unsupervised skill discovery ] [ unsupervised stabilization ] [ Upper Confidence bound applied to Trees (UCT) ] [ Usable Information ] [ VAE ] [ Value factorization ] [ value learning ] [ vanishing gradient problem ] [ variable binding ] [ variable convergence ] [ Variable Embeddings ] [ Variance Networks ] [ Variational Auto-encoder ] [ Variational autoencoders ] [ Variational Autoencoders ] [ Variational inference ] [ variational information bottleneck ] [ Verification ] [ video analysis ] [ Video Classification ] [ Video Compression ] [ video generation ] [ video-grounded dialogues ] [ Video prediction ] [ Video Reasoning ] [ video recognition ] [ Video Recognition ] [ video representation learning ] [ video synthesis ] [ video-text learning ] [ views ] [ virtual environment ] [ vision-and-language-navigation ] [ visual counting ] [ visualization ] [ visual perception ] [ Visual Reasoning ] [ visual reinforcement learning ] [ visual representation learning ] [ visual saliency ] [ vocoder ] [ voice conversion ] [ Volume Analysis ] [ VQA ] [ vulnerability of RL ] [ wanet ] [ warping functions ] [ Wasserstein ] [ wasserstein-2 barycenters ] [ wasserstein-2 distance ] [ Wasserstein distance ] [ waveform generation ] [ weakly-supervised learning ] [ weakly supervised representation learning ] [ Weak supervision ] [ Weak-supervision ] [ webly-supervised learning ] [ weight attack ] [ weight balance ] [ Weight quantization ] [ weight-sharing ] [ wide local minima ] [ Wigner-Eckart Theorem ] [ winning tickets ] [ wireframe model ] [ word-learning ] [ world models ] [ World Models ] [ worst-case generalisation ] [ xai ] [ XAI ] [ zero-order optimization ] [ zero-shot learning ] [ Zero-shot learning ] [ Zero-shot Learning ] [ Zero-shot synthesis ]

182 Results

Poster
Mon 1:00 Implicit Normalizing Flows
Cheng Lu, Jianfei Chen, Chongxuan Li, Qiuhao Wang, Jun Zhu
Poster
Mon 1:00 Tomographic Auto-Encoder: Unsupervised Bayesian Recovery of Corrupted Data
Francesco Tonolini, Pablo Garcia Moreno, Andreas Damianou, Roderick Murray-Smith
Poster
Mon 1:00 Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors
Linfeng Zhang, Kaisheng Ma
Poster
Mon 1:00 Wasserstein Embedding for Graph Learning
Soheil Kolouri, Navid Naderializadeh, Gustavo K Rohde, Heiko Hoffmann
Poster
Mon 1:00 The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods
Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon
Poster
Mon 1:00 Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs
Cheng Wang, Carolin Lawrence, Mathias Niepert
Poster
Mon 1:00 Trusted Multi-View Classification
Zongbo Han, Changqing Zhang, Huazhu Fu, Joey T Zhou
Poster
Mon 1:00 Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
Zhiqiang Shen, Dejia Xu, Zitian Chen, Kwang-Ting Cheng, Marios Savvides
Poster
Mon 1:00 SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization
A F M Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, Sung-Ho Bae
Poster
Mon 1:00 Training with Quantization Noise for Extreme Model Compression
Pierre Stock, Angela Fan, Benjamin Graham, Edouard Grave, Rémi Gribonval, Hervé Jégou, Armand Joulin
Poster
Mon 1:00 WaNet - Imperceptible Warping-based Backdoor Attack
Tuan Anh Nguyen, Anh T Tran
Poster
Mon 1:00 Domain Generalization with MixStyle
Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang
Poster
Mon 1:00 Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning
Dong Bok Lee, Dongchan Min, Seanie Lee, Sung Ju Hwang
Poster
Mon 1:00 On the Transfer of Disentangled Representations in Realistic Settings
Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wuthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schoelkopf
Poster
Mon 1:00 LEAF: A Learnable Frontend for Audio Classification
Neil Zeghidour, Olivier Teboul, Félix de Chaumont Quitry, Marco Tagliasacchi
Oral
Mon 3:15 Free Lunch for Few-shot Learning: Distribution Calibration
Shuo Yang, Lu Liu, Min Xu
Poster
Mon 9:00 Gradient Projection Memory for Continual Learning
Gobinda Saha, Isha Garg, Kaushik Roy
Poster
Mon 9:00 Single-Photon Image Classification
Thomas Fischbacher, Luciano Sbaiz
Poster
Mon 9:00 What Can You Learn From Your Muscles? Learning Visual Representation from Human Interactions
Kiana Ehsani, Daniel Gordon, Thomas H Nguyen, Roozbeh Mottaghi, Ali Farhadi
Poster
Mon 9:00 Learning Hyperbolic Representations of Topological Features
Panagiotis Kyriakis, Iordanis Fostiropoulos, Paul Bogdan
Poster
Mon 9:00 Predicting Classification Accuracy When Adding New Unobserved Classes
Yuli Slavutsky, Yuval Benjamini
Poster
Mon 9:00 WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic
Renkun Ni, Hong-Min Chu, Oscar Castaneda, Ping-yeh Chiang, Christoph Studer, Tom Goldstein
Poster
Mon 9:00 Understanding the failure modes of out-of-distribution generalization
Vaishnavh Nagarajan, Anders J Andreassen, Behnam Neyshabur
Poster
Mon 9:00 Seq2Tens: An Efficient Representation of Sequences by Low-Rank Tensor Projections
Csaba Toth, Patric Bonnier, Harald Oberhauser
Poster
Mon 9:00 Overparameterisation and worst-case generalisation: friend or foe?
Aditya Krishna Menon, Ankit Singh Rawat, Sanjiv Kumar
Poster
Mon 9:00 Multi-Time Attention Networks for Irregularly Sampled Time Series
Satya Narayan Shukla, Benjamin M Marlin
Poster
Mon 9:00 Disentangling 3D Prototypical Networks for Few-Shot Concept Learning
Mihir Prabhudesai, Shamit Lal, Darshan Patil, Hsiao-Yu Tung, Adam Harley, Katerina Fragkiadaki
Poster
Mon 9:00 The Risks of Invariant Risk Minimization
Elan Rosenfeld, Pradeep K Ravikumar, Andrej Risteski
Poster
Mon 9:00 What Should Not Be Contrastive in Contrastive Learning
Tete Xiao, Xiaolong Wang, Alyosha Efros, Trevor Darrell
Poster
Mon 9:00 LambdaNetworks: Modeling long-range Interactions without Attention
Irwan Bello
Poster
Mon 9:00 Intrinsic-Extrinsic Convolution and Pooling for Learning on 3D Protein Structures
Pedro Hermosilla Casajus, Marco Schäfer, Matej Lang, Gloria Fackelmann, Pere-Pau Vázquez, Barbora Kozlikova, Michael Krone, Tobias Ritschel, Timo Ropinski
Poster
Mon 9:00 A statistical theory of cold posteriors in deep neural networks
Laurence Aitchison
Poster
Mon 9:00 PAC Confidence Predictions for Deep Neural Network Classifiers
Sangdon Park, Shuo Li, Insup Lee, Osbert Bastani
Poster
Mon 9:00 Parameter Efficient Multimodal Transformers for Video Representation Learning
Sangho Lee, Youngjae Yu, Gunhee Kim, Thomas Breuel, Jan Kautz, Yale Song
Poster
Mon 9:00 Uncertainty Sets for Image Classifiers using Conformal Prediction
Anastasios Angelopoulos, Stephen Bates, Michael Jordan, Jitendra Malik
Poster
Mon 9:00 Representation learning for improved interpretability and classification accuracy of clinical factors from EEG
Garrett Honke, Irina Higgins, Nina Thigpen, Vladimir Miskovic, Katie Link, Sunny Duan, Pramod Gupta, Julia Klawohn, Greg Hajcak
Poster
Mon 9:00 Structured Prediction as Translation between Augmented Natural Languages
Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, Stefano Soatto
Poster
Mon 9:00 Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models
Siavash Khodadadeh, Sharare Zehtabian, Saeed Vahidian, Weijia Wang, Bill Lin, Ladislau Boloni
Poster
Mon 9:00 LiftPool: Bidirectional ConvNet Pooling
Jiaojiao Zhao, Cees G Snoek
Oral
Mon 11:15 Gradient Projection Memory for Continual Learning
Gobinda Saha, Isha Garg, Kaushik Roy
Oral
Mon 11:30 Growing Efficient Deep Networks by Structured Continuous Sparsification
Xin Yuan, Pedro Savarese, Michael Maire
Spotlight
Mon 12:15 On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
Kenji Kawaguchi
Spotlight
Mon 13:20 Uncertainty Sets for Image Classifiers using Conformal Prediction
Anastasios Angelopoulos, Stephen Bates, Michael Jordan, Jitendra Malik
Poster
Mon 17:00 PseudoSeg: Designing Pseudo Labels for Semantic Segmentation
Yuliang Zou, Zizhao Zhang, Han Zhang, Chun-Liang Li, Xiao Bian, Jia-Bin Huang, Tomas Pfister
Poster
Mon 17:00 Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy
Akinori Ebihara, Taiki Miyagawa, Kazuyuki Sakurai, Hitoshi Imaoka
Poster
Mon 17:00 Random Feature Attention
Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, Lingpeng Kong
Poster
Mon 17:00 Semi-supervised Keypoint Localization
Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
Poster
Mon 17:00 Layer-adaptive Sparsity for the Magnitude-based Pruning
Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Poster
Mon 17:00 Why resampling outperforms reweighting for correcting sampling bias with stochastic gradients
Jing An, Lexing Ying, Yuhua Zhu
Poster
Mon 17:00 The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers
Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi
Poster
Mon 17:00 SOLAR: Sparse Orthogonal Learned and Random Embeddings
Tharun Medini, Beidi Chen, Anshumali Shrivastava
Poster
Mon 17:00 Selective Classification Can Magnify Disparities Across Groups
Erik Jones, Shiori Sagawa, Pang Wei Koh, Ananya Kumar, Percy Liang
Poster
Mon 17:00 Model Patching: Closing the Subgroup Performance Gap with Data Augmentation
Karan Goel, Albert Gu, Yixuan Li, Christopher Re
Poster
Mon 17:00 Explaining the Efficacy of Counterfactually Augmented Data
Divyansh Kaushik, Amrith Setlur, Eduard H Hovy, Zachary Lipton
Poster
Mon 17:00 MoPro: Webly Supervised Learning with Momentum Prototypes
Junnan Li, Caiming Xiong, Steven Hoi
Poster
Mon 17:00 Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks
Alexander Levine, Soheil Feizi
Spotlight
Mon 19:45 Structured Prediction as Translation between Augmented Natural Languages
Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang, Stefano Soatto
Spotlight
Mon 21:56 Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning
Dong Bok Lee, Dongchan Min, Seanie Lee, Sung Ju Hwang
Poster
Tue 1:00 Deep Repulsive Clustering of Ordered Data Based on Order-Identity Decomposition
Seon-Ho Lee, Chang-Su Kim
Poster
Tue 1:00 Accurate Learning of Graph Representations with Graph Multiset Pooling
Jinheon Baek, Minki Kang, Sung Ju Hwang
Poster
Tue 1:00 Contemplating Real-World Object Classification
Ali Borji
Poster
Tue 1:00 Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples
Ziang Yan, Yiwen Guo, Jian Liang, Changshui Zhang
Poster
Tue 1:00 A Universal Representation Transformer Layer for Few-Shot Image Classification
Lu Liu, Will Hamilton, Guodong Long, Jing Jiang, Hugo Larochelle
Poster
Tue 1:00 Learning Better Structured Representations Using Low-rank Adaptive Label Smoothing
Asish Ghoshal, Xilun Chen, Sonal Gupta, Luke Zettlemoyer, Yashar Mehdad
Poster
Tue 1:00 Learning the Pareto Front with Hypernetworks
Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik
Poster
Tue 1:00 Calibration tests beyond classification
David Widmann, Fredrik Lindsten, Dave Zachariah
Poster
Tue 1:00 Lossless Compression of Structured Convolutional Models via Lifting
Gustav Sourek, Filip Zelezny, Ondrej Kuzelka
Poster
Tue 1:00 On Self-Supervised Image Representations for GAN Evaluation
Stanislav Morozov, Andrey Voynov, Artem Babenko
Poster
Tue 9:00 Learning Parametrised Graph Shift Operators
George Dasoulas, Johannes Lutzeyer, Michalis Vazirgiannis
Poster
Tue 9:00 Representation Learning via Invariant Causal Mechanisms
Jovana Mitrovic, Brian McWilliams, Jacob C Walker, Lars Buesing, Charles Blundell
Poster
Tue 9:00 Uncertainty-aware Active Learning for Optimal Bayesian Classifier
Guang Zhao, Edward Dougherty, Byung-Jun Yoon, Francis Alexander, Xiaoning Qian
Poster
Tue 9:00 On the Dynamics of Training Attention Models
Haoye Lu, Yongyi Mao, Amiya Nayak
Poster
Tue 9:00 On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
Kenji Kawaguchi
Poster
Tue 9:00 Tent: Fully Test-Time Adaptation by Entropy Minimization
Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell
Poster
Tue 9:00 Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
Beliz Gunel, Jingfei Du, Alexis Conneau, Veselin Stoyanov
Poster
Tue 9:00 The geometry of integration in text classification RNNs
Kyle Aitken, Vinay Ramasesh, Ankush Garg, Yuan Cao, David Sussillo, Niru Maheswaranathan
Poster
Tue 9:00 Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding
Sana Tonekaboni, Danny Eytan, Anna Goldenberg
Poster
Tue 9:00 Shape or Texture: Understanding Discriminative Features in CNNs
Md Amirul Islam, Matthew Kowal, Patrick Esser, Sen Jia, Björn Ommer, Kosta Derpanis, Neil Bruce
Spotlight
Tue 13:38 Long-tail learning via logit adjustment
Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, Sanjiv Kumar
Poster
Tue 17:00 A Discriminative Gaussian Mixture Model with Sparsity
Hideaki Hayashi, Seiichi Uchida
Poster
Tue 17:00 Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?
Zhiyuan Li, Yi Zhang, Sanjeev Arora
Poster
Tue 17:00 A unifying view on implicit bias in training linear neural networks
Chulhee (Charlie) Yun, Shankar Krishnan, Hossein Mobahi
Poster
Tue 17:00 Usable Information and Evolution of Optimal Representations During Training
Michael Kleinman, Alessandro Achille, Daksh Idnani, Jonathan Kao
Poster
Tue 17:00 Can a Fruit Fly Learn Word Embeddings?
Yuchen Liang, Chaitanya Ryali, Ben Hoover, Leopold Grinberg, Saket Navlakha, Mohammed J Zaki, Dmitry Krotov
Poster
Tue 17:00 Monotonic Kronecker-Factored Lattice
William Bakst, Nobuyuki Morioka, Erez Louidor
Poster
Tue 17:00 Contextual Dropout: An Efficient Sample-Dependent Dropout Module
Xinjie Fan, Shujian Zhang, Korawat Tanwisuth, Xiaoning Qian, Mingyuan Zhou
Poster
Tue 17:00 Concept Learners for Few-Shot Learning
Kaidi Cao, Maria Brbic, Jure Leskovec
Poster
Wed 1:00 Explainable Deep One-Class Classification
Philipp Liznerski, Lukas Ruff, Robert A Vandermeulen, Billy J Franks, Marius Kloft, Klaus-Robert Müller
Poster
Wed 1:00 Knowledge distillation via softmax regression representation learning
Jing Yang, Brais Martinez, Adrian Bulat, Georgios Tzimiropoulos
Poster
Wed 1:00 Negative Data Augmentation
Abhishek Sinha, Kumar Ayush, Jiaming Song, Burak Uzkent, Hongxia Jin, Stefano Ermon
Poster
Wed 1:00 BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction
Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, Shi Gu
Poster
Wed 1:00 DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
Alexandre Rame, Matthieu Cord
Poster
Wed 1:00 Auxiliary Task Update Decomposition: The Good, the Bad and the Neutral
Lucio Dery, Yann Dauphin, David Grangier
Poster
Wed 1:00 Separation and Concentration in Deep Networks
John Zarka, Florentin Guth, Stéphane Mallat
Poster
Wed 1:00 Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum
Poster
Wed 1:00 Active Contrastive Learning of Audio-Visual Video Representations
Shuang Ma, Zhaoyang Zeng, Daniel McDuff, Yale Song
Poster
Wed 1:00 High-Capacity Expert Binary Networks
Adrian Bulat, Brais Martinez, Georgios Tzimiropoulos
Poster
Wed 1:00 Differentiable Segmentation of Sequences
Erik Scharwächter, Jonathan Lennartz, Emmanuel Müller
Poster
Wed 1:00 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby
Poster
Wed 1:00 Self-supervised Adversarial Robustness for the Low-label, High-data Regime
Sven Gowal, Po-Sen Huang, Aaron van den Oord, Timothy A Mann, Pushmeet Kohli
Poster
Wed 1:00 No Cost Likelihood Manipulation at Test Time for Making Better Mistakes in Deep Networks
Shyamgopal Karthik, Ameya Prabhu, Puneet Dokania, Vineet Gandhi
Poster
Wed 1:00 Graph Edit Networks
Benjamin Paassen, Daniele Grattarola, Daniele Zambon, Cesare Alippi, Barbara E Hammer
Poster
Wed 1:00 Reweighting Augmented Samples by Minimizing the Maximal Expected Loss
Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma
Poster
Wed 1:00 A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
Sanghyun Hong, Yigitcan Kaya, Ionut-Vlad Modoranu, Tudor Dumitras
Poster
Wed 1:00 Simple Spectral Graph Convolution
Hao Zhu, Piotr Koniusz
Oral
Wed 3:00 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby
Spotlight
Wed 4:40 Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum
Spotlight
Wed 5:25 Tent: Fully Test-Time Adaptation by Entropy Minimization
Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell
Spotlight
Wed 5:45 Implicit Normalizing Flows
Cheng Lu, Jianfei Chen, Chongxuan Li, Qiuhao Wang, Jun Zhu
Poster
Wed 9:00 Growing Efficient Deep Networks by Structured Continuous Sparsification
Xin Yuan, Pedro Savarese, Michael Maire
Poster
Wed 9:00 For self-supervised learning, Rationality implies generalization, provably
Yamini Bansal, Gal Kaplun, Boaz Barak
Poster
Wed 9:00 Evaluation of Neural Architectures Trained With Square Loss vs Cross-Entropy in Classification Tasks
Like Hui, Misha Belkin
Poster
Wed 9:00 Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
Paul Pu Liang, Manzil Zaheer, Yuan Wang, Amr Ahmed
Poster
Wed 9:00 Graph Information Bottleneck for Subgraph Recognition
Junchi Yu, Tingyang Xu, Yu Rong, Yatao Bian, Junzhou Huang, Ran He
Poster
Wed 9:00 Provably robust classification of adversarial examples with detection
Fatemeh Sheikholeslami, Ali Lotfi, Zico Kolter
Poster
Wed 9:00 Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H Fowl, Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
Poster
Wed 9:00 Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit
Ben Adlam, Jaehoon Lee, Lechao Xiao, Jeffrey Pennington, Jasper Snoek
Poster
Wed 9:00 Unbiased Teacher for Semi-Supervised Object Detection
Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, Peter Vajda
Poster
Wed 9:00 Long-tail learning via logit adjustment
Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, Sanjiv Kumar
Spotlight
Wed 12:38 Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy
Akinori Ebihara, Taiki Miyagawa, Kazuyuki Sakurai, Hitoshi Imaoka
Spotlight
Wed 12:48 LambdaNetworks: Modeling long-range Interactions without Attention
Irwan Bello
Poster
Wed 17:00 Protecting DNNs from Theft using an Ensemble of Diverse Models
Sanjay Kariyappa, Atul Prakash, Moinuddin K Qureshi
Poster
Wed 17:00 Efficient Conformal Prediction via Cascaded Inference with Expanded Admission
Adam Fisch, Tal Schuster, Tommi Jaakkola, Regina Barzilay
Poster
Wed 17:00 Learning and Evaluating Representations for Deep One-Class Classification
Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Minho Jin, Tomas Pfister
Poster
Wed 17:00 Beyond Categorical Label Representations for Image Classification
Boyuan Chen, Yu Li, Sunand Raghupathi, Hod Lipson
Poster
Wed 17:00 Learning with Feature-Dependent Label Noise: A Progressive Approach
Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, Chao Chen
Poster
Wed 17:00 Adaptive Universal Generalized PageRank Graph Neural Network
Eli Chien, Jianhao Peng, Pan Li, Olgica Milenkovic
Poster
Wed 17:00 BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig, Ali Madani, Lav R Varshney, Caiming Xiong, Richard Socher, Nazneen Rajani
Spotlight
Wed 20:50 CPT: Efficient Deep Neural Network Training via Cyclic Precision
Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, Yingyan Lin
Poster
Thu 1:00 Repurposing Pretrained Models for Robust Out-of-domain Few-Shot Learning
Namyeong Kwon, Hwidong Na, Gabriel Huang, Simon Lacoste-Julien
Poster
Thu 1:00 Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search
Peidong Liu, Gengwei Zhang, Bochao Wang, Hang Xu, Xiaodan Liang, Yong Jiang, Zhenguo Li
Poster
Thu 1:00 Incremental few-shot learning via vector quantization in deep embedded space
Kuilin Chen, Chi-Guhn Lee
Poster
Thu 1:00 Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks
Jan Schuchardt, Aleksandar Bojchevski, Johannes Klicpera, Stephan Günnemann
Poster
Thu 1:00 AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights
Byeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Gyuwan Kim, Youngjung Uh, Jung-Woo Ha
Poster
Thu 1:00 Hopfield Networks is All You Need
Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Lukas Gruber, Markus Holzleitner, Thomas Adler, David Kreil, Michael K Kopp, Günter Klambauer, Johannes Brandstetter, Sepp Hochreiter
Poster
Thu 1:00 Counterfactual Generative Networks
Axel Sauer, Andreas Geiger
Poster
Thu 1:00 Free Lunch for Few-shot Learning: Distribution Calibration
Shuo Yang, Lu Liu, Min Xu
Spotlight
Thu 4:55 On Self-Supervised Image Representations for GAN Evaluation
Stanislav Morozov, Andrey Voynov, Artem Babenko
Poster
Thu 9:00 Multi-Class Uncertainty Calibration via Mutual Information Maximization-based Binning
Kanil Patel, William H Beluch, Bin Yang, Michael Pfeiffer, Dan Zhang
Poster
Thu 9:00 Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification
Francisco Utrera, Evan Kravitz, N. Benjamin Erichson, Rajiv Khanna, Michael W Mahoney
Poster
Thu 9:00 Uncertainty in Gradient Boosting via Ensembles
Andrey Malinin, Liudmila Prokhorenkova, Aleksei Ustimenko
Poster
Thu 9:00 On Position Embeddings in BERT
Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, Jakob Simonsen
Poster
Thu 9:00 Dataset Meta-Learning from Kernel Ridge-Regression
Timothy Nguyen, Zhourong Chen, Jaehoon Lee
Poster
Thu 9:00 A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks
Nikunj Saunshi, Sadhika Malladi, Sanjeev Arora
Poster
Thu 9:00 Bayesian Few-Shot Classification with One-vs-Each Pólya-Gamma Augmented Gaussian Processes
Jake Snell, Richard Zemel
Poster
Thu 9:00 EEC: Learning to Encode and Regenerate Images for Continual Learning
Ali Ayub, Alan Wagner
Poster
Thu 9:00 C-Learning: Learning to Achieve Goals via Recursive Classification
Ben Eysenbach, Ruslan Salakhutdinov, Sergey Levine
Poster
Thu 9:00 Deep Networks and the Multiple Manifold Problem
Sam Buchanan, Dar Gilboa, John Wright
Oral
Thu 11:45 Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?
Zhiyuan Li, Yi Zhang, Sanjeev Arora
Spotlight
Thu 13:30 A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
Sanghyun Hong, Yigitcan Kaya, Ionut-Vlad Modoranu, Tudor Dumitras
Expo Talk Panel
Thu 14:00 AI Model Efficiency Toolkit talk & demo
Abhi Khobare
Poster
Thu 17:00 CT-Net: Channel Tensorization Network for Video Classification
Kunchang Li, Xianhang Li, Yali Wang, Jun Wang, Yu Qiao
Poster
Thu 17:00 Extreme Memorization via Scale of Initialization
Harsh Mehta, Ashok Cutkosky, Behnam Neyshabur
Poster
Thu 17:00 CPT: Efficient Deep Neural Network Training via Cyclic Precision
Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, Yingyan Lin
Poster
Thu 17:00 Combining Label Propagation and Simple Models out-performs Graph Neural Networks
Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin Benson
Poster
Thu 17:00 In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning
Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, Mubarak Shah
Poster
Thu 17:00 CO2: Consistent Contrast for Unsupervised Visual Representation Learning
Chen Wei, Huiyu Wang, Wei Shen, Alan Yuille
Poster
Thu 17:00 No MCMC for me: Amortized sampling for fast and stable training of energy-based models
Will Grathwohl, Jacob Kelly, Milad Hashemi, Mohammad Norouzi, Kevin Swersky, David Duvenaud
Poster
Thu 17:00 Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization
Kaidi Cao, Yining Chen, Junwei Lu, Nikos Arechiga, Adrien Gaidon, Tengyu Ma
Poster
Thu 17:00 Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models
Mitch Hill, Jonathan Mitchell, Song-Chun Zhu
Poster
Thu 17:00 Calibration of Neural Networks using Splines
Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, Richard Hartley
Poster
Thu 17:00 Prototypical Representation Learning for Relation Extraction
Ning Ding, Xiaobin Wang, Yao Fu, Guangwei Xu, Rui Wang, Pengjun Xie, Ying Shen, Fei Huang, Hai-Tao Zheng, Rui Zhang
Poster
Thu 17:00 How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu
Spotlight
Thu 20:15 Random Feature Attention
Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, Lingpeng Kong
Spotlight
Thu 20:25 Learning with Feature-Dependent Label Noise: A Progressive Approach
Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, Chao Chen
Workshop
Fri 3:25 Do Input Gradients Highlight Discriminative Features?
Harshay Shah
Workshop
Fri 4:45 Oral 1: Yann Dubois et al., Lossy Compression for Lossless Prediction
Taco Cohen
Workshop
Fri 5:10 Hugo Larochelle, Google Brain Montréal, Adjunct Professor at Université de Montréal and a Canada CIFAR Chair
Hugo Larochelle
Workshop
Fri 6:10 Voice2Series: Reprogramming Acoustic Models for Time Series Classification
Huck Yang
Workshop
Fri 6:26 Submodular Mutual Information for Targeted Data Subset Selection
Suraj Kothawade
Workshop
Fri 6:30 Break & Poster session 1
Workshop
Fri 6:34 Min-Entropy Sampling Might Lead to Better Generalization in Deep Text Classification
Nimrah Shakeel
Workshop
Fri 8:15 Boosting Classification Accuracy of Fertile Sperm Cell Images leveraging cDCGAN
Dipam Paul
Workshop
Fri 8:25 Invited Speaker Marine Carpuat - Weak Supervision for Cross-Lingual Semantic Analysis
Marine Carpuat
Workshop
Fri 9:30 Break & Poster session 2
Workshop
Fri 11:32 Leveraging Unlabelled Data through Semi-supervised Learning to Improve the Performance of a Marine Mammal Classification System
Mark Thomas
Workshop
Fri 11:36 Continuous Weight Balancing
Daniel J Wu
Workshop
Fri 11:40 Spotlight 8: Graph Autoencoder for Graph Compression and Representation Learning
Yunhao Ge
Workshop
Fri 11:44 Boosting Classification Accuracy of Fertile Sperm Cell Images leveraging cDCGAN
Dipam Paul
Workshop
Fri 11:48 Towards Robustness to Label Noise in Text Classification via Noise Modeling
Siddhant Garg
Workshop
PyVertical: A Vertical Federated Learning Framework for Multi-headed SplitNN
Daniele Romanini, Adam Hall, Pavlos Papadopoulos, Tom Titcombe, Abbas Ismail, Tudor Cebere, Robert Sandmann, Robin Roehm, Michael Hoeh
Workshop
Heterogeneous Zero-Shot Federated Learning with New Classes for Audio Classification
Gautham Krishna Gudur, Satheesh Perepu