BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//iisc-demo v1//NONSGML iCal Writer//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTEND:20191213T120000Z
UID:52b07ba3e5555d7e1437088421a839bf-9
DTSTAMP:19700101T120016Z
DESCRIPTION:Cryptography: In Anticipation of Quantum Computer
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/9/cryptography-in-anticipation-of-quantum-computer/
SUMMARY:Does quantum supremacy have any impact on cryptographic security as practiced today? Will we be prepared enough in case quantum computers become a reality in the not-so-distant future?
Painting with a broad brush, the talk will attempt to draw the contours of cryptography in a post-quantum world. We will touch upon the challenges and difficulties of securing our various mundane activities that involve communication over a public channel, and the kind of research that has been initiated as part of this broad agenda.
DTSTART:20191213T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191129T120000Z
UID:8b721dabd1acfcede4103b829bc7c8a6-10
DTSTAMP:19700101T120016Z
DESCRIPTION:An Algorithmic View of Smart Living: The Next Frontier
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/10/an-algorithmic-view-of-smart-living-the-next-frontier/
SUMMARY:The advent of wireless sensor networks and the Internet of Things (IoT) is making our lives increasingly dependent on a wide variety of cyber-physical systems (CPS) and smart services (e.g., smart energy, transportation, healthcare, agriculture), while aiming to improve quality of life. The availability of rich mobile devices like smartphones is also empowering humans to act as sensors to collect fine-grained information via crowd sensing, resulting in actionable inferences and decisions. This talk will first provide a "smart living" vision and highlight the underlying research challenges. Next, it will formulate fundamental problems related to sensor data fusion, coverage and connectivity, mobile charging, security, network lifetime and resource trade-offs. Novel solutions will be designed based on randomized and approximation algorithms on graphs, optimization techniques, geometric probability theory, uncertainty reasoning, trust models and game theory. Case studies and experimental results will be presented where possible. The talk will conclude with emerging applications of sensors and IoT, followed by directions for future research.
DTSTART:20191129T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191206T120000Z
UID:52f7704c5ecca413b2bb1b4007fe8e9e-12
DTSTAMP:19700101T120011Z
DESCRIPTION:Cyber(-Physical) Systems and Machine Learning: Research Challenges
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/12/cyber-physical-systems-and-machine-learning-research-challenges/
SUMMARY:The rapid digital transformation of our world, the exponential improvements in computing, communication, and storage, and the evolution of machine learning are transforming the very nature of scientific discovery and engineering research and making advances in many areas including personalized health care, emergency response, traffic flow management, and electric power generation. These developments have resulted in new complex applications and workflows that require sophisticated cyber and cyber-physical systems that deeply integrate computation, communication, and control into physical systems. After a brief overview of machine learning, I will describe the characteristics of the new applications and the architectures of the required cyber(-physical) systems. The focus will be on the current research challenges in programming, debugging, and managing these systems safely and securely, utilizing machine learning technologies.
DTSTART:20191206T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191127T120000Z
UID:ff7db712605f142f2c82210b4c2361c0-14
DTSTAMP:19700101T120004Z
DESCRIPTION:Improved Truthful Mechanisms for Combinatorial Auctions with Submodular Bidders
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/14/improved-truthful-mechanisms-for-combinatorial-auctions-with-submodular-bidders/
SUMMARY:A longstanding open problem in Algorithmic Mechanism Design is to design computationally efficient truthful mechanisms for (approximately) maximizing welfare in combinatorial auctions with submodular bidders. The first such mechanism was obtained by Dobzinski, Nisan, and Schapira [STOC'06], who gave an O(log^2 m)-approximation, where m is the number of items. This problem has been studied extensively since, culminating in an O(sqrt(log m))-approximation mechanism by Dobzinski [STOC'16]. We present a computationally efficient truthful mechanism whose approximation ratio improves upon the state of the art by an exponential factor. In particular, our mechanism achieves an O((log log m)^3)-approximation in expectation, uses only O(n) demand queries, and has a universal truthfulness guarantee. This settles, in the negative, an open question of Dobzinski on whether Θ(sqrt(log m)) is the best approximation ratio achievable in this setting.
This is based on joint work with Sepehr Assadi and appeared in FOCS 2019.
DTSTART:20191127T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191127T120000Z
UID:ec2a1a57975ac9a75d66e9322322278b-15
DTSTAMP:19700101T120004Z
DESCRIPTION:Beyond trace reconstruction: Population recovery from the deletion channel
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/15/beyond-trace-reconstruction-population-recovery-from-the-deletion-channel/
SUMMARY:Population recovery is the problem of learning an unknown distribution over an unknown set of n-bit strings, given access to independent draws from the distribution that have been independently corrupted according to some noise channel. Recent work has intensively studied such problems both for the bit-flip and erasure noise channels. We initiate the study of population recovery under the deletion channel, in which each bit is independently deleted with some fixed probability and the surviving bits are concatenated and transmitted, in both worst-case and average-case settings of the strings in the support. This is a generalization of trace reconstruction, a challenging problem that has received much recent attention.
For the worst case, we show that for any s = o(log n / log log n), a population of s strings from {0,1}^n can be learned under deletion channel noise using exp(n^{1/2+o(1)}) samples. On the lower bound side, we show that n^{Omega(s)} samples are required to perform population recovery under the deletion channel, for all s <= n^0.49.
For the average case, we give an efficient algorithm for population recovery. The algorithm runs in time poly(n,s,1/eps) and its sample complexity is poly(s, 1/eps, exp(log^{1/3} n)), where eps is the TV distance between the original and output distributions.
This is based on the following joint work with Frank Ban, Xi Chen, Adam Freilich and Rocco Servedio:
https://arxiv.org/abs/1904.05532
https://arxiv.org/abs/1907.05964
DTSTART:20191127T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191206T120000Z
UID:5ce08ca02226e7d89b60b890f0836e89-16
DTSTAMP:19700101T120015Z
DESCRIPTION:Geometric Search Techniques for Provably Robust Query Processing
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/16/geometric-search-techniques-for-provably-robust-query-processing/
SUMMARY:Relational Database Management Systems (RDBMS) constitute the backbone of today's information-rich society, providing a congenial environment for handling enterprise data during its entire life cycle of generation, storage, maintenance and processing. The Structured Query Language (SQL) is the de facto standard interface for querying the information present in RDBMS-based repositories. An extremely attractive feature of SQL is that it is declarative in nature, meaning that the user specifies only the end objectives, leaving to the system the task of identifying the optimal execution strategy to achieve those objectives.
A crucial input to generating efficient query execution strategies, called \\
DTSTART:20191206T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191206T120000Z
UID:c399be4e930ff2881638abf81b7d541d-17
DTSTAMP:19700101T120016Z
DESCRIPTION:A Graph-Theorist's Perspective on the Quest for Dichotomy
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/17/a-graph-theorists-perspective-on-the-quest-for-dichotomy/
SUMMARY:The celebrated Feder-Vardi Dichotomy Conjecture for Constraint Satisfaction has
recently been established by Bulatov and by Zhuk. Because of the profound impact
the conjecture had on theoretical computer science, Feder and Vardi were jointly
awarded the Alonzo Church Award for 2019. The solution involved a beautiful blend
of graph theory, logic, and universal algebra, developed over a period of 25 years;
these developments, in turn, impacted all these fields.
I will present a personal account of how a graph-theorist perceived the events and
developments leading up to the formulation, and the solution, of the conjecture. I
will focus on the impact on graph theory, but briefly mention the other fields as well.
DTSTART:20191206T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191219T120000Z
UID:ba6fd012897363024456839946a537eb-18
DTSTAMP:19700101T120011Z
DESCRIPTION:Discover[i]: Component-based Parameterized Reasoning for Distributed Applications
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/18/discoveri-component-based-parameterized-reasoning-for-distributed-applications/
SUMMARY:This talk begins with an overview of recent and ongoing work in Purdue's Formal Methods (PurForM) research group. Next, the talk presents our Discover[i] project, which seeks to automate reasoning about new classes of distributed applications built on top of verified components. The current focus of this project is on parameterized verification and synthesis of systems that use consensus protocols, such as Paxos, as a building block to provide higher-level functionality. The talk explains the key ingredients of our framework: (1) an abstraction of consensus with a simple atomic primitive, (2) a decidability result and algorithm for parameterized verification of safety properties of systems with such consensus primitives, and (3) an algorithm for parameterized synthesis of coordination for such systems.
Discover[i] is joint work with Nouraldin Jaber, Christopher Wagner, Swen Schewe and Milind Kulkarni.
DTSTART:20191219T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191211T120000Z
UID:dc872eef7653bdbd599819ec63da7ddd-19
DTSTAMP:19700101T120010Z
DESCRIPTION:Optimistic Hybrid Analysis for System Security and Reliability
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/19/optimistic-hybrid-analysis-for-system-security-and-reliability/
SUMMARY:Dynamic analysis tools such as dynamic information-flow tracking (DIFT) and data-race detection are useful for enforcing security policies and improving software reliability. But these tools are rarely used in production systems, as they can slow down a program by an order of magnitude. Static whole-program analyses can be used to prove safe execution states and elide unnecessary runtime checks, but in practice they are mostly ineffective for large programs. The reason is that they are greatly hindered by the need to prove their soundness, as soundness requires analyzing all possible executions and making sound over-approximations of a program.
This talk presents Optimistic Hybrid Analysis (OHA). OHA improves the scalability and precision of whole-program static analysis by one to two orders of magnitude by making optimistic assumptions about a program's properties that are almost always true but are hard to prove statically. By making these assumptions, we sacrifice the soundness of the static analysis, yet we preserve the soundness of the dynamic analysis by checking the assumptions at runtime and recovering when they fail.
OHA has been used to obtain three promising results: it speeds up FastTrack, a well-known dynamic data-race detector, by 3.5x; it reduces the overhead of DIFT to less than 10%, a 4.4x improvement; and it enables the first known solution for a sound garbage collector for C/C++, using efficient pointer provenance.
DTSTART:20191211T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191210T120000Z
UID:40e7d9e23862f222a19ebee978adab59-20
DTSTAMP:19700101T120011Z
DESCRIPTION:CHET: An Optimizing Compiler for Fully-Homomorphic Neural-Network Inferencing
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/20/chet-an-optimizing-compiler-for-fully-homomorphic-neural-network-inferencing/
SUMMARY:Fully Homomorphic Encryption (FHE) refers to a set of encryption schemes that allow computations on encrypted data without requiring a secret key. Recent cryptographic advances have pushed FHE into the realm of practical applications. However, programming these applications remains a huge challenge, as it requires cryptographic domain expertise to ensure correctness, security, and performance.
CHET is a domain-specific optimizing compiler designed to make the task of programming FHE applications easier. Motivated by the need to perform neural network inference on encrypted medical and financial data, CHET supports a domain-specific language for specifying tensor circuits. It automates many of the laborious and error-prone tasks of encoding such circuits homomorphically, including selecting encryption parameters to guarantee the security and accuracy of the computation, determining efficient tensor layouts, and performing scheme-specific optimizations.
Our evaluation on a collection of popular neural networks shows that CHET generates homomorphic circuits that outperform expert-tuned circuits and makes it easy to switch across different encryption schemes. We demonstrate its scalability by evaluating it on a version of SqueezeNet, which, to the best of our knowledge, is the deepest neural network to be evaluated homomorphically.
DTSTART:20191210T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191212T120000Z
UID:2c3326b70f7b1ac0597d42e0107c3ba4-22
DTSTAMP:19700101T120015Z
DESCRIPTION:Achieving Fairness in the Stochastic Multi-armed Bandit Problem
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/22/achieving-fairness-in-the-stochastic-multi-armed-bandit-problem/
SUMMARY:The classical Stochastic Multi-armed Bandit (MAB) problem provides an abstraction for many real-world decision-making problems such as sponsored-search auctions, crowd-sourcing, wireless communication, etc. In this work, we study Fair-MAB, a variant of the MAB problem in which, in addition to the goal of maximizing the sum of expected rewards, the algorithm also has to ensure that each arm is pulled for at least a given fraction of the total number of rounds; this imposes an additional fairness constraint on the algorithm. The non-trivial aspect of Fair-MAB arises when the time horizon T is unknown to the algorithm. Treatments of fairness in the MAB problem are either procedural or outcome-based. Procedural notions of fairness require that the decision-making process of the algorithm satisfy some fairness guarantees. In contrast, we consider an outcome-based fairness notion, which can be validated based on the outcomes of the algorithm's decisions.
Our primary contribution is characterizing a class of Fair-MAB algorithms by two parameters: the unfairness tolerance and the learning algorithm used as a black box. We define an appropriate notion of fairness and show that our algorithms guarantee fairness independent of the choice of the learning algorithm. We define the notion of fairness-aware regret, which naturally extends the conventional notion of regret, and show that the fairness-aware regret of our algorithm matches, in order, the regret of the black-box learning algorithm in the absence of fairness constraints. Finally, we show via detailed simulations that our algorithm outperforms the best known algorithm for the Fair-MAB problem, both in terms of the regret and in terms of the fairness guarantee that it provides. We also evaluate the cost of fairness in the MAB setting in terms of the conventional notion of regret.
DTSTART:20191212T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191216T120000Z
UID:89a7a155a8589ecd186f51876097b03d-23
DTSTAMP:19700101T120011Z
DESCRIPTION:Lessons Developing Conversational AI Interfaces
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/23/lessons-developing-conversational-ai-interfaces/
SUMMARY:This talk will cover building conversational AI using deep learning technologies and lessons learnt in developing conversational interfaces. The first part of the talk will describe recent advances in deep learning that have led to tremendous progress in natural language processing and are making conversational AI a reality. Conversational AI includes intent classification, sequence labeling, understanding dialogs and context, and coming up with responses to users' messages. The second part of the talk will address lessons learned in developing conversational interfaces. A conversational interface needs to be personable in addressing, adaptive in understanding, and available with many different supported tasks. Overall, the key takeaways from the talk will be a better understanding of (1) deep learning techniques for natural language processing and (2) interaction patterns for building a good conversational interface.
DTSTART:20191216T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191217T120000Z
UID:0aaa17a59e93254929ef8e6a41f7ac4e-25
DTSTAMP:19700101T120011Z
DESCRIPTION:Equivalence test for the trace iterated matrix multiplication polynomial
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/25/equivalence-test-for-the-trace-iterated-matrix-multiplication-polynomial/
SUMMARY:An m-variate polynomial f is affine equivalent to an n-variate polynomial g if m ≥ n and there is a rank-n matrix A ∈ F^(n×m) and b ∈ F^n such that f(x) = g(Ax + b). Given blackbox access to f and g (i.e., membership query access), the affine equivalence test problem is to determine whether f is affine equivalent to g, and if yes, to output a rank-n matrix A ∈ F^(n×m) and b ∈ F^n such that f(x) = g(Ax + b). This problem is at least as hard as graph isomorphism and algebra isomorphism even when the coefficients of f and g are given explicitly (Agrawal and Saxena, STACS 2006), and has been studied in the literature by fixing g to be some interesting family of polynomials. In this work, we fix g to be the trace of the product of d w×w symbolic matrices X_1, ..., X_d. We call this polynomial Tr-IMM_{w,d}. Kayal, Nair, Saha and Tavenas (CCC 2017) gave an efficient (i.e., (mwd)^O(1)-time) randomized algorithm for the affine equivalence test of the iterated matrix multiplication polynomial IMM_{w,d}, which is the (1,1)-th entry of the product of d w×w symbolic matrices. Although the definitions of Tr-IMM_{w,d} and IMM_{w,d} are closely related and their circuit complexities are very similar, it is not clear whether an efficient affine equivalence test algorithm for IMM_{w,d} implies the same for Tr-IMM_{w,d}. In this thesis, we take a step towards showing that the equivalence tests for Tr-IMM_{w,d} and IMM_{w,d} have different complexities. We show that the equivalence test for Tr-IMM_{w,d} reduces in randomized polynomial time to the equivalence test for the determinant (DET), under mild conditions on the underlying field. If the converse is also true, then the equivalence tests for Tr-IMM_{w,d} and DET are randomized polynomial-time equivalent. It would then follow from the work of Gupta, Garg, Kayal and Saha (ICALP 2019) that the equivalence test for Tr-IMM_{w,d} over Q is at least as hard as Integer Factoring. This would be in sharp contrast with the complexity of the equivalence test for IMM_{w,d} over Q, which can be solved in randomized polynomial time (by Kayal, Nair, Saha and Tavenas (CCC 2017)).
Recent update: Soon after the thesis was written, we (together with Vineet Nair) succeeded in showing the converse direction. So the above conclusion is indeed true!
DTSTART:20191217T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191213T120000Z
UID:bd16f4561b80b91d8d3076c304402372-26
DTSTAMP:19700101T120011Z
DESCRIPTION:Pushing the Limits of Fairness in Collective Decision-Making
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/26/pushing-the-limits-of-fairness-in-collective-decision-making/
SUMMARY:In this talk, I will discuss fairness in collective decision-making, which includes everyday tasks such as heirs dividing an estate or residents voting over allocation of public budget. First, I will describe our recent work in fair allocation of private goods, where we strengthen individual notions of fairness to groupwise notions. I will also talk about ongoing projects in which we mix ex-ante and ex-post notions of fairness, and aim to achieve the best of both worlds. Finally, I will talk about generalization to allocation of public goods, and how various notions of fairness extend (or not) to this setting.
A promised take-away from the talk? Many fascinating open questions, and insights into how resolving them can help society (cue promotion of www.spliddit.org).
DTSTART:20191213T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20200103T120000Z
UID:e9037afc4b4cc722d79d70ec2ebb8cef-27
DTSTAMP:19700101T120016Z
DESCRIPTION:Ballooning Multi-Armed Bandits
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/27/ballooning-multi-armed-bandits/
SUMMARY:Many common web-based applications such as online Q&A forums and online review portals need a scientific way of identifying high-quality answers or opinions and distinguishing them from ordinary ones. To tackle this problem, we introduce a new model which we call \\
DTSTART:20200103T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191220T120000Z
UID:598d795caffabc22fe239e3625815e7c-28
DTSTAMP:19700101T120011Z
DESCRIPTION:Privacy Preserving Machine Learning via Multi-party Computation
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/28/privacy-preserving-machine-learning-via-multi-party-computation/
SUMMARY:Privacy-preserving machine learning (PPML) via Secure Multi-party Computation (MPC) has gained momentum in the recent past. Assuming a minimal network of pairwise private channels, we propose FLASH, an efficient four-party PPML framework over rings, the first of its kind in the PPML regime, that achieves the strongest security notion of Guaranteed Output Delivery (all parties obtain the output irrespective of the adversary's behaviour). State-of-the-art ML frameworks such as ABY3 by Mohassel et al. (ACM CCS'18) and SecureNN by Wagh et al. (PETS'19) operate in the setting of 3 parties with one malicious corruption but achieve only the weaker security guarantee of abort. We demonstrate PPML with real-time efficiency, using the following custom-made tools that overcome the limitations of the aforementioned state of the art: (a) dot product, whose cost is independent of the vector size, unlike ABY3, SecureNN and ASTRA by Chaudhari et al. (ACM CCSW'19), all of which have linear dependence on the vector size; and (b) truncation and MSB extraction, which are constant-round and free of circuits like the Parallel Prefix Adder (PPA) and Ripple Carry Adder (RCA), unlike ABY3, which uses these circuits and has round complexity of the order of the depth of these circuits. We then exhibit the application of our FLASH framework in the secure server-aided prediction of vital algorithms: Linear Regression, Logistic Regression, Deep Neural Networks, and Binarized Neural Networks. We substantiate our theoretical claims through improvements in benchmarks of the aforementioned algorithms when compared with the current best framework, ABY3. All the protocols are implemented over a 64-bit ring in LAN and WAN settings. Our experiments demonstrate that, for the MNIST dataset, the improvement (in terms of throughput) ranges from 24x to 1390x over Local Area Network (LAN) and Wide Area Network (WAN) together.
DTSTART:20191220T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191219T120000Z
UID:15b908b3c1cc405192e9dad4af44b0e7-29
DTSTAMP:19700101T120011Z
DESCRIPTION:Memory compression for higher effective capacity and bandwidth
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/29/memory-compression-for-higher-effective-capacity-and-bandwidth/
SUMMARY:Many important client and data center applications need large memory capacity and high memory bandwidth to achieve their performance and energy efficiency goals. Hardware memory compression provides a promising direction to increase effective memory capacity and bandwidth without increasing system cost. This talk focuses on low overhead, hardware-centric compression mechanisms and compressed data management solutions for CPUs and GPUs.
DTSTART:20191219T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191220T120000Z
UID:2f7eb8ed0fead5c2570f8164970889ca-30
DTSTAMP:19700101T120011Z
DESCRIPTION:Language Agnostic Representation Learning for Product Shopping and Discovery
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/30/language-agnostic-representation-learning-for-product-shopping-and-discovery/
SUMMARY:Have you ever wondered how we serve high-quality content at Amazon across different languages in product shopping and discovery? In this talk I will touch upon various scientific challenges involved in serving high-quality content to our customers. In particular, when customers are looking for specific products, we select and show items that customers would like to purchase using query-product classification models. Learning a separate model for each language (across different countries, Amazon.com vs Amazon.in) is challenging, as the (hard) negatives available are sparse and hard to obtain. We solve this problem in a language-agnostic manner using additional structured relationships, such as query-query alignment and product-product alignment. I will discuss how this additional data, along with a transformer-encoder-based architecture with scaled self-attention, can outperform several state-of-the-art benchmarks. This paper has been accepted to WSDM 2020.
At the end, I will discuss various job opportunities and academic collaborations for working with us in our brand new division here in Bangalore.
DTSTART:20191220T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191227T120000Z
UID:b7a2db19918f93a0c430f53b0333eaba-31
DTSTAMP:19700101T120011Z
DESCRIPTION:Number of near-shortest vectors in lattices
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/31/number-of-near-shortest-vectors-in-lattices/
SUMMARY:For a matrix A, consider the lattice L(A) formed by all integral vectors v in the null-space of A. We ask for which matrices A the lattice L(A) has only polynomially many near-shortest vectors, i.e., vectors whose length is close to the shortest length in L(A). The motivation for this question comes from the fact that we can get a deterministic black-box polynomial identity testing (PIT) algorithm for any polynomial whose Newton polytope has faces described by matrices with the aforementioned property.
We show that when the matrix A is totally unimodular (all sub-determinants are 0, +1, or -1), the lattice L(A) has only polynomially many near-shortest vectors. The proof of this statement goes via a remarkable theorem of Seymour on a decomposition for totally unimodular matrices. The statement generalizes two earlier known results: the number of near-shortest cycles and the number of near-shortest cuts in a graph are polynomially bounded. As a special case, we get PIT for any polynomial of the form det(sum x_i A_i) for rank-1 matrices A_i.
DTSTART:20191227T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20191227T120000Z
UID:0da1de660786bdde6affa65845583f56-32
DTSTAMP:19700101T120016Z
DESCRIPTION:The Bethe and Sinkhorn Permanents of Structured Matrices and its Implications for Profile Maximum Likelihood
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/32/the-bethe-and-sinkhorn-permanents-of-structured-matrices-and-its-implications-for-profile-maximum-likelihood/
SUMMARY:Here we consider the problem of computing the likelihood of the profile of a discrete distribution, i.e., the probability of observing the multiset of element frequencies, and computing a profile maximum likelihood (PML) distribution, i.e., a distribution with the maximum profile likelihood. For each problem we provide polynomial-time algorithms that, given $n$ i.i.d. samples from a discrete distribution, achieve an approximation factor of $\exp(-O(\sqrt{n} \log n))$, improving upon the previous best-known bound achievable in polynomial time of $\exp(-O(n^{2/3} \log n))$ (Charikar, Shiragur and Sidford, 2019). Through the work of Acharya, Das, Orlitsky and Suresh (2016), this implies a polynomial-time universal estimator for symmetric properties of discrete distributions in a broader range of error parameters.
We achieve these results by providing new bounds on the quality of approximation of the Bethe and Sinkhorn permanents (Vontobel, 2012 and 2014). We show that each of these is an $\exp(O(k \log(N/k)))$ approximation to the permanent of $N \times N$ non-negative matrices with at most $k$ distinct columns, improving upon the previously known bound of $\exp(O(N))$.
To obtain our results on PML, we exploit the fact that the PML objective is proportional to the permanent of a certain Vandermonde matrix with $\sqrt{n}$ distinct columns. As a by-product of our work, we establish a surprising connection between the convex relaxation in prior work (CSS19) and the well-studied Bethe and Sinkhorn approximations.
This is joint work with Moses Charikar and Aaron Sidford.
DTSTART:20191227T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20200106T120000Z
UID:cf10c9b35443b0b0889ec3ddd4182b42-33
DTSTAMP:19700101T120013Z
DESCRIPTION:Beyond trace reconstruction: Population recovery from the deletion channel
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/33/beyond-trace-reconstruction-population-recovery-from-the-deletion-channel/
SUMMARY:Population recovery is the problem of learning an unknown distribution over an unknown set of n-bit strings, given access to independent draws from the distribution that have been independently corrupted according to some noise channel. Recent work has intensively studied such problems both for the bit-flip and erasure noise channels. We initiate the study of population recovery under the deletion channel, in which each bit is independently deleted with some fixed probability and the surviving bits are concatenated and transmitted, in both worst-case and average-case settings of the strings in the support. This is a generalization of trace reconstruction, a challenging problem that has received much recent attention.
For the worst case, we show that for any s = o(log n / log log n), a population of s strings from {0,1}^n can be learned under deletion channel noise using exp(n^{1/2+o(1)}) samples. On the lower bound side, we show that n^{Omega(s)} samples are required to perform population recovery under the deletion channel, for all s <= n^0.49.
For the average case, we give an efficient algorithm for population recovery. The algorithm runs in time poly(n,s,1/eps) and its sample complexity is poly(s, 1/eps, exp(log^{1/3} n)), where eps is the TV distance between the original and output distributions.
This is based on the following joint work with Frank Ban, Xi Chen, Adam Freilich and Rocco Servedio: https://arxiv.org/abs/1904.05532 https://arxiv.org/abs/1907.05964
DTSTART:20200106T120000Z
END:VEVENT
BEGIN:VEVENT
DTEND:20200103T120000Z
UID:4245907523c9faa543d17aba927cefff-34
DTSTAMP:19700101T120011Z
DESCRIPTION:Architecting Persistent Memory Systems
URL;VALUE=URI:http://demo.onlypixels.com/iisc-csa/event/34/architecting-persistent-memory-systems/
SUMMARY:Persistent Memory (PM) technologies (also known as Non-Volatile RAM, e.g., Intel's 3D XPoint) offer the exciting possibility of disk-like durability with the performance of main memory. Persistent memory systems provide applications with direct access to storage media via processor load and store instructions rather than having to rely on performance-sapping software intermediaries like the operating system, aiding the development of high-performance, recoverable software. For example, I envision storage software that provides the safety and correctness of a conventional database management system like PostgreSQL and the performance of an in-memory store like Redis. However, today's computing systems have been optimized for block storage devices and cannot fully exploit the benefits of PMs. Designing efficient systems for this new storage paradigm requires a careful rethink of computer architectures, programming interfaces, and application software.
While maintaining recoverable data structures in main memory is the central appeal of persistent memories, current systems do not provide efficient mechanisms (if any) to do so. Ensuring the recoverability of these data structures requires constraining the order of PM writes, whereas current architectures are designed to reorder memory accesses, transparent to the programmer, for performance. In this talk, I will introduce recently proposed programming interfaces, called persistency models, that will allow programmers to express the required order of PM writes. Then, I will present my work on developing efficient hardware implementations to enforce the PM write order prescribed by persistency models and tailoring software for these new programming interfaces.
DTSTART:20200103T120000Z
END:VEVENT
END:VCALENDAR