# Impossibility Results for Grammar-Compressed Linear Algebra

NeurIPS 2020


Abstract

To handle vast amounts of data, it is natural and popular to compress vectors and matrices. When we compress a vector from size $N$ down to size $n \ll N$, it certainly makes it easier to store and transmit efficiently, but does it also make it easier to process? In this paper we consider lossless compression schemes, and ask if we ca…


Introduction

- The idea of using compression to speed up computations can be found in any domain that deals with large-scale data, and ML is no exception.
- Given two vectors encoded in this way with size n_RLE, a simple one-pass algorithm can compute their inner product in O(n_RLE) time.
- The authors prove new hardness reductions showing cases where the time to compute the inner product must be large even when the vectors have very small grammar compressions.
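The one-pass RLE inner-product algorithm mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's code; the `(value, run_length)` pair representation is an assumption:

```python
def rle_inner_product(u, v):
    """Inner product of two RLE-encoded vectors of equal dimension.

    u, v: lists of (value, run_length) pairs. The runs of the two
    encodings are consumed in lockstep, so the running time is
    O(len(u) + len(v)) rather than O(N) for the decompressed size N.
    """
    i = j = 0        # index of the current run in u and v
    ri = rj = 0      # how much of each current run is already consumed
    total = 0
    while i < len(u) and j < len(v):
        # overlap of the two current runs
        step = min(u[i][1] - ri, v[j][1] - rj)
        total += u[i][0] * v[j][0] * step
        ri += step
        rj += step
        if ri == u[i][1]:
            i += 1
            ri = 0
        if rj == v[j][1]:
            j += 1
            rj = 0
    return total

# [3, 3, 3, 0, 0] . [1, 2, 2, 2, 5] = 3 + 6 + 6 + 0 + 0 = 15
print(rle_inner_product([(3, 3), (0, 2)], [(1, 1), (2, 3), (5, 1)]))
```

Each loop iteration finishes at least one run of u or v, which is why the total number of iterations is bounded by the sum of the two encoding sizes.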

Highlights

- The answer depends on the compression scheme.
- While for simple schemes such as Run-Length Encoding (RLE) the inner product can be done in Õ(n) time, we prove that this is impossible for compressions from a richer class: essentially n² or even larger runtimes are needed in the worst case.
- While with RLE encoding the product can be computed in O(N·n) time, which is linear in the representation size of the matrix and optimal, it turns out that for grammar compressions Ω(N·n²) is required.
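For intuition on why grammar compressions can be far smaller than the data they encode, here is a toy straight-line program (SLP): each rule concatenates two symbols, and a chain of n doubling rules encodes a vector of length 2^n. The rule format is an illustrative assumption, not the paper's formal definition:

```python
def expand(grammar, symbol):
    """Expand an SLP symbol into the full vector it represents.

    grammar maps a nonterminal name to a pair of symbols; anything
    not present in the grammar is a terminal (a literal vector entry).
    """
    if symbol not in grammar:
        return [symbol]
    left, right = grammar[symbol]
    return expand(grammar, left) + expand(grammar, right)

# n doubling rules: A0 -> (7, 7), Ai -> (A(i-1), A(i-1)).
n = 10
grammar = {"A0": (7, 7)}
for i in range(1, n):
    grammar[f"A{i}"] = (f"A{i-1}", f"A{i-1}")

vector = expand(grammar, f"A{n-1}")  # length 2^n from a size-n grammar
print(len(vector))
```

A grammar with n rules can thus represent a vector of dimension N = 2^n, which is exactly the exponential gap between n and N that the lower bounds above exploit.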

Results

- There are N-dimensional vectors with grammar-compressions of size n = O(N^{1/3}) for which the inner product must take Ω(n²) time to compute.
- Assuming the 3SUM conjecture, the inner product of two N-dimensional vectors that are grammar-compressed to size n cannot be computed in O(n^{2−ε}) time for any ε > 0.
- Assuming the 3SUM conjecture, the product of an N×N matrix, where each row is grammar-compressed to size n, with an N-dimensional vector that is grammar-compressed to size n cannot be computed in O(N·n^{2−ε}) time for any ε > 0.
- Unlike the above questions that deal with computation time, this is an information-theoretic question, and in Section 5 the authors give strong and unconditional negative answers: the matrix C cannot be grammar-compressed to size o(N²/log² N) even when A and B are strongly compressible.
- The authors present the proof of Theorem 1.1 by giving a reduction from 3SUM to the inner product of compressed vectors.
- If the authors assume the Strong 3SUM conjecture, they can start with 3SUM instances where U = O(m²) and get vectors of dimension N = O((m log m)³), ruling out inner product algorithms with time O(n^{2−ε}).
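For context, 3SUM asks whether some three numbers in a set sum to zero, and the 3SUM conjecture asserts that the classic quadratic-time algorithm is essentially optimal. A minimal quadratic solver for the repetition-allowed variant (the exact variant used in the paper's reduction may differ):

```python
def has_3sum(nums):
    """Decide whether a + b + c = 0 for some a, b, c in nums
    (repetition allowed), in O(m^2) expected time via hashing.
    """
    values = set(nums)
    m = len(nums)
    for i in range(m):
        for j in range(i, m):
            # does some element complete the pair to a zero sum?
            if -(nums[i] + nums[j]) in values:
                return True
    return False

print(has_3sum([3, -5, 2]))  # 3 + (-5) + 2 = 0
```

The reductions in the paper encode such an instance into a pair of highly compressible vectors, so that a fast compressed inner product would yield a subquadratic 3SUM algorithm, contradicting the conjecture.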

Conclusions

- For any fixed ε > 0, letting k be a sufficiently large constant integer such that 1/(k−2) < ε, the Strong kSUM conjecture implies that N-dimensional vectors with compressed size n = O(N^ε) cannot have an O(N^{1/3−δ})-time algorithm for any constant δ > 0.
- The authors sketch how to prove Theorem 1.2 by giving a reduction from 3SUM to Matrix-Vector multiplication on compressed data.
- When both A and B are given as strong compressions, the resulting representation can have a much smaller size, but to compute a single entry C_{i,j}, the authors first might need to obtain a representation of the row A_i and the column B_j. The authors have several options for representing C.
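To see why RLE-compressed matrix-vector multiplication runs in time linear in the representation size, note that a prefix-sum table over the vector lets each run of a row contribute in constant time. A sketch under that idea (the representation and names are illustrative assumptions):

```python
from itertools import accumulate

def rle_matvec(rows_rle, x):
    """Multiply a matrix given as RLE-compressed rows by a dense vector x.

    rows_rle: list of rows, each a list of (value, run_length) pairs
    whose run lengths sum to len(x). With prefix[k] = x[0] + ... + x[k-1],
    a run of `value` over positions [pos, pos + length) contributes
    value * (prefix[pos + length] - prefix[pos]) to the dot product,
    so the total time is O(len(x) + total number of runs).
    """
    prefix = [0] + list(accumulate(x))
    result = []
    for row in rows_rle:
        pos = 0
        dot = 0
        for value, length in row:
            dot += value * (prefix[pos + length] - prefix[pos])
            pos += length
        result.append(dot)
    return result

# Matrix [[2,2,0],[0,1,1],[3,3,3]] times vector [1,2,3].
print(rle_matvec([[(2, 2), (0, 1)], [(0, 1), (1, 2)], [(3, 3)]], [1, 2, 3]))
```

With N rows of at most n runs each this is O(N·n), matching the RLE upper bound quoted above; the paper's lower bounds show no comparable speedup is possible for grammar compressions.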

- Table 1: The potential savings from grammar-compressed linear algebra: compression rates on real datasets. We compare zip, a standard grammar-compression, with Run-Length Encoding (RLE), a simple method that works well on repetitive or sparse data. For more such results, see [35, Table 1].
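The RLE scheme compared in the table can be stated in a few lines; on repetitive or sparse columns the number of runs, and hence the encoding size, is far below the dimension N. A minimal sketch, not the benchmarked implementation:

```python
def rle_encode(vec):
    """Run-length encode vec as a list of (value, run_length) pairs."""
    runs = []
    for v in vec:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((v, 1))              # start a new run
    return runs

# A sparse column of length 1001 compresses to just 2 runs.
print(len(rle_encode([0] * 1000 + [1])))
```

A simple compression rate is N divided by the number of stored pairs, which is the kind of ratio the table reports for real datasets.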

Related Work

- There have been a few recent works showing fine-grained complexity results for machine learning problems. In particular, [14] showed that the classic algorithm of Viterbi, which computes the most likely path in a Hidden Markov Model that results in a given sequence of observations, is essentially optimal assuming certain complexity-theoretic hypotheses. Another work [13] showed conditional hardness results for multiple empirical risk minimization problems such as kernel support vector machines, kernel ridge regression, and training the final layer of a neural network. Furthermore, there are many works that show hardness for problems that are used in the machine learning literature. This includes conditional lower bounds for kernel low-rank approximation [68], closest pair and its variants [9, 75, 88, 24, 29, 28], maximum inner product [6, 22, 23], earth mover's distance (a.k.a. Wasserstein metric) [74], and dynamic time warping distance [3, 17].

Funding

- This work is part of the project TIPEA, which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 850979).
- Author affiliation: Max Planck Institute for Informatics, Saarland Informatics Campus, marvin@mpi-inf.mpg.de
- Footnote 1: we use the notation Õ(n) = n · N^{o(1)} for near-linear time, hiding small terms such as log factors.

References

- Daniel Abadi, Samuel Madden, and Miguel Ferreira. Integrating compression and execution in columnoriented database systems. In Proceedings of the 2006 ACM SIGMOD international conference on Management of data, pages 671–682, 2006.
- Amir Abboud, Arturs Backurs, Karl Bringmann, and Marvin Kunnemann. Fine-grained complexity of analyzing compressed data: Quantifying improvements over decompress-and-solve. In 58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017, Berkeley, CA, USA, October 15-17, 2017, pages 192–203, 2017.
- Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. Tight hardness results for lcs and other sequence similarity measures. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 59–78. IEEE, 2015.
- Amir Abboud and Kevin Lewi. Exact weight subgraphs and the k-sum conjecture. In International Colloquium on Automata, Languages, and Programming, pages 1–12.
- Amir Abboud, Kevin Lewi, and Ryan Williams. Losing weight by gaining edges. In European Symposium on Algorithms, pages 1–12.
- Amir Abboud, Aviad Rubinstein, and Ryan Williams. Distributed pcp theorems for hardness of approximation in p. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 25–36. IEEE, 2017.
- Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 434–443. IEEE, 2014.
- Amir Abboud, Virginia Vassilevska Williams, and Oren Weimann. Consequences of faster alignment of sequences. In International Colloquium on Automata, Languages, and Programming, pages 39–51.
- Josh Alman and Ryan Williams. Probabilistic polynomials and hamming nearest neighbors. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 136–150. IEEE, 2015.
- A. Amir, T. M. Chan, M. Lewenstein, and N. Lewenstein. On hardness of jumbled indexing. In Proc. ICALP, volume 8572, pages 114–125, 2014.
- Amihood Amir, Gary Benson, and Martin Farach. Let sleeping files lie: Pattern matching in zcompressed files. Journal of Computer and System Sciences, 52(2):299–307, 1996.
- Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge University Press, 2009.
- Arturs Backurs, Piotr Indyk, and Ludwig Schmidt. On the fine-grained complexity of empirical risk minimization: Kernel methods and neural networks. In Advances in Neural Information Processing Systems, pages 4308–4318, 2017.
- Arturs Backurs and Christos Tzamos. Improving viterbi is hard: Better runtimes imply faster clique algorithms. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 311–321. JMLR. org, 2017.
- Ilya Baran, Erik D Demaine, and Mihai Patrascu. Subquadratic algorithms for 3sum. In Workshop on Algorithms and Data Structures, pages 409–421.
- Ella Bingham and Heikki Mannila. Random projection in dimensionality reduction: applications to image and text data. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 245–250, 2001.
- Karl Bringmann and Marvin Kunnemann. Quadratic conditional lower bounds for string problems and dynamic time warping. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 79–97. IEEE, 2015.
- Patrick Cegielski, Irene Guessarian, Yury Lifshits, and Yuri Matiyasevich. Window subsequence problems for compressed texts. In Proc. 1st International Computer Science Symposium in Russia (CSR’06), pages 127–136.
- Timothy M Chan. More logarithmic-factor speedups for 3sum,(median,+)-convolution, and some geometric 3sum-hard problems. ACM Transactions on Algorithms (TALG), 16(1):1–23, 2019.
- Moses Charikar, Eric Lehman, Ding Liu, Rina Panigrahy, Manoj Prabhakaran, Amit Sahai, and Abhi Shelat. The smallest grammar problem. STOC’02 and IEEE Transactions on Information Theory, 51(7):2554–2576, 2005.
- Kuan-Yu Chen, Ping-Hui Hsu, and Kun-Mao Chao. Approximate matching for run-length encoded strings is 3sum-hard. In Annual Symposium on Combinatorial Pattern Matching, pages 168–179.
- Lijie Chen. On the hardness of approximate and exact (bichromatic) maximum inner product. arXiv preprint arXiv:1802.02325, 2018.
- Lijie Chen, Shafi Goldwasser, Kaifeng Lyu, Guy N Rothblum, and Aviad Rubinstein. Fine-grained complexity meets ip= pspace. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1–20. SIAM, 2019.
- Lijie Chen and Ryan Williams. An equivalence class for orthogonal vectors. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 21–40. SIAM, 2019.
- Zhiyuan Chen, Johannes Gehrke, and Flip Korn. Query optimization in compressed database systems. In Proceedings of the 2001 ACM SIGMOD international conference on Management of data, pages 271–282, 2001.
- Tejalal Choudhary, Vipul Mishra, Anurag Goswami, and Jagannathan Sarangapani. A comprehensive survey on model compression and acceleration. Artif. Intell. Rev., 53(7):5113–5155, 2020.
- Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press, 2009.
- Karthik CS and Pasin Manurangsi. On closest pair in euclidean metric: Monochromatic is as hard as bichromatic. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
- Roee David and Bundit Laekhanukit. On the complexity of closest pair via polar-pair of point-sets. SIAM Journal on Discrete Mathematics, 33(1):509–527, 2019.
- Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
- Bartlomiej Dudek, Pawel Gawrychowski, and Tatiana Starikovskaya. All non-trivial variants of 3-ldt are equivalent. CoRR, abs/2001.01289, 2020.
- Ahmed Elgohary, Matthias Boehm, Peter J. Haas, Frederick R. Reiss, and Berthold Reinwald. Compressed linear algebra for large-scale machine learning. Proc. VLDB Endow., 9(12):960–971, 2016.
- Ahmed Elgohary, Matthias Boehm, Peter J. Haas, Frederick R. Reiss, and Berthold Reinwald. Scaling machine learning via compressed linear algebra. SIGMOD Rec., 46(1):42–49, 2017.
- Ahmed Elgohary, Matthias Boehm, Peter J. Haas, Frederick R. Reiss, and Berthold Reinwald. Compressed linear algebra for large-scale machine learning. VLDB J., 27(5):719–744, 2018.
- Ahmed Elgohary, Matthias Boehm, Peter J. Haas, Frederick R. Reiss, and Berthold Reinwald. Compressed linear algebra for declarative large-scale machine learning. Commun. ACM, 62(5):83–91, 2019.
- Martin Farach and Mikkel Thorup. String matching in Lempel-Ziv compressed strings. In Proc. 27th Annual ACM Symposium on Theory of Computing (STOC’95), pages 703–712. ACM, 1995.
- Ari Freund. Improved subquadratic 3sum. Algorithmica, 77(2):440–458, 2017.
- Anka Gajentaan and Mark H. Overmars. On a class of O(n²) problems in computational geometry. Computational Geometry, 5(3):165–185, 1995.
- Leszek Gasieniec, Marek Karpinski, Wojciech Plandowski, and Wojciech Rytter. Efficient algorithms for Lempel-Ziv encoding. Proc. 5th Scandinavian Workshop on Algorithm Theory (SWAT’96), pages 392–403, 1996.
- Pawel Gawrychowski. Pattern matching in Lempel-Ziv compressed strings: fast, simple, and deterministic. In Proc. 19th Annual European Symposium on Algorithms (ESA’11), pages 421–432.
- Raffaele Giancarlo, Davide Scaturro, and Filippo Utro. Textual data compression in computational biology: a synopsis. Bioinformatics, 25(13):1575–1586, 2009.
- Omer Gold and Micha Sharir. Improved bounds for 3SUM, K-SUM, and linear degeneracy. CoRR, abs/1512.05279, 2015.
- Isaac Goldstein, Tsvi Kopelowitz, Moshe Lewenstein, and Ely Porat. How hard is it to find (honest) witnesses? arXiv preprint arXiv:1706.05815, 2017.
- Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. In Yoshua Bengio and Yann LeCun, editors, Proc. 4th International Conference on Learning Representations, ICLR 2016, 2016.
- Danny Hermelin, Gad M Landau, Shir Landau, and Oren Weimann. Unified compression-based acceleration of edit-distance computation. Algorithmica, 65(2):339–353, 2013.
- Balakrishna R Iyer and David Wilhite. Data compression support in databases. In VLDB, volume 94, pages 695–704, 1994.
- Klaus Jansen, Felix Land, and Kati Land. Bounding the running time of algorithms for scheduling and packing problems. SIAM J. Discret. Math., 30(1):343–366, 2016.
- Artur Jez. Approximation of grammar-based compression via recompression. Theoretical Computer Science, 592:115–134, 2015.
- Artur Jez. Faster fully compressed pattern matching by recompression. ACM Transactions on Algorithms (TALG), 11(3):20, 2015.
- Artur Jez. A really simple approximation of smallest grammar. Theoretical Computer Science, 616:141– 150, 2016.
- Allan Grønlund Jørgensen and Seth Pettie. Threesomes, degenerates, and love triangles. In Proc. of the 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 621–630, 2014.
- Vasileios Karakasis, Theodoros Gkountouvas, Kornilios Kourtis, Georgios Goumas, and Nectarios Koziris. An extended compression format for the optimization of sparse matrix-vector multiplication. IEEE Transactions on Parallel and Distributed Systems, 24(10):1930–1940, 2012.
- Marek Karpinski, Wojciech Rytter, and Ayumi Shinohara. Pattern-matching for strings with short descriptions. In Proc. Annual Symposium on Combinatorial Pattern Matching (CPM’95), pages 205– 214.
- John C. Kieffer and En-Hui Yang. Grammar-based codes: A new class of universal lossless source codes. IEEE Trans. Inf. Theory, 46(3):737–754, 2000.
- Tsvi Kopelowitz, Seth Pettie, and Ely Porat. Higher lower bounds from the 3sum conjecture. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 1272– 1287. SIAM, 2016.
- Kornilios Kourtis, Georgios Goumas, and Nectarios Koziris. Optimizing sparse matrix-vector multiplication using index and value compression. In Proceedings of the 5th conference on Computing frontiers, pages 87–96, 2008.
- N Jesper Larsson. Structures of string matching and data compression. Department of Computer Science, Lund University, 1999.
- Abraham Lempel and Jacob Ziv. On the complexity of finite sequences. IEEE Transactions on Information Theory, 22(1):75–81, 1976.
- Fengan Li, Lingjiao Chen, Arun Kumar, Jeffrey F Naughton, Jignesh M Patel, and Xi Wu. When lempel-ziv-welch meets machine learning: A case study of accelerating machine learning using coding. arXiv preprint arXiv:1702.06943, 2017.
- Yury Lifshits. Processing compressed texts: A tractability border. In Bin Ma and Kaizhong Zhang, editors, Proc. 18th Annual Symposium on Combinatorial Pattern Matching (CPM 2007), volume 4580 of Lecture Notes in Computer Science, pages 228–240.
- Yury Lifshits, Shay Mozes, Oren Weimann, and Michal Ziv-Ukelson. Speeding up hmm decoding and training by exploiting sequence repetitions. Algorithmica, 54(3):379–399, 2009.
- Andrea Lincoln, Virginia Vassilevska Williams, Joshua R. Wang, and R. Ryan Williams. Deterministic time-space trade-offs for k-sum. In International Colloquium on Automata, Languages, and Programming, pages 58:1–58:14, 2016.
- Qi Liu, Yu Yang, Chun Chen, Jiajun Bu, Yin Zhang, and Xiuzi Ye. RNACompress: Grammar-based compression and informational complexity measurement of RNA secondary structure. BMC bioinformatics, 9(1):176, 2008.
- Markus Lohrey. Algorithmics on slp-compressed strings: A survey. Groups Complexity Cryptology, 4(2):241–299, 2012.
- Alaa Maalouf, Ibrahim Jubran, and Dan Feldman. Fast and accurate least-mean-squares solvers. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 8305– 8316, 2019.
- Sebastian Maneth and Fabian Peternek. A survey on methods and systems for graph compression. arXiv preprint arXiv:1504.00616, 2015.
- Sebastian Maneth and Fabian Peternek. Grammar-based graph compression. Information Systems, 76:19–45, 2018.
- Cameron Musco and David Woodruff. Is input sparsity time possible for kernel low-rank approximation? In Advances in Neural Information Processing Systems, pages 4435–4445, 2017.
- Craig G Nevill-Manning and Ian H Witten. Compression and explanation using hierarchical grammars. The Computer Journal, 40(2 and 3):103–116, 1997.
- Hristo S. Paskov, Robert West, John C. Mitchell, and Trevor J. Hastie. Compressive feature learning. In Christopher J. C. Burges, Leon Bottou, Zoubin Ghahramani, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems, pages 2931–2939, 2013.
- Wojciech Plandowski. Testing equivalence of morphisms on context-free languages. Proc. 2nd Annual European Symposium on Algorithms (ESA’94), pages 460–470, 1994.
- Mihai Patrascu. Towards polynomial lower bounds for dynamic problems. In Proc. of the 42nd Annual ACM Symposium on Theory Of Computing (STOC), pages 603–610, 2010.
- Roberto Radicioni and Alberto Bertoni. Grammatical compression: compressed equivalence and other problems. Discrete Mathematics and Theoretical Computer Science, 12(4):109, 2010.
- Dhruv Rohatgi. Conditional hardness of earth mover distance. arXiv preprint arXiv:1909.11068, 2019.
- Aviad Rubinstein. Hardness of approximate nearest neighbor search. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 1260–1268, 2018.
- Wojciech Rytter. Application of Lempel–Ziv factorization to the approximation of grammar-based compression. Theoretical Computer Science, 302(1-3):211–222, 2003.
- Wojciech Rytter. Grammar compression, lz-encodings, and string algorithms with implicit input. In Proc. 31st International Colloquium on Automata, Languages, and Programming (ICALP’04), pages 15–27.
- Yousef Saad. Iterative methods for sparse linear systems, volume 82. siam, 2003.
- Hiroshi Sakamoto. A fully linear-time approximation algorithm for grammar-based compression. Journal of Discrete Algorithms, 3(2):416–430, 2005.
- Hiroshi Sakamoto. Grammar compression: Grammatical inference by compression and its application to real data. In ICGI, pages 3–20, 2014.
- D Sculley and Carla E Brodley. Compression and machine learning: A new perspective on feature space vectors. In Proc. Data Compression Conference (DCC’06), pages 332–341, 2006.
- Yusuxke Shibata, Takuya Kida, Shuichi Fukamachi, Masayuki Takeda, Ayumi Shinohara, Takeshi Shinohara, and Setsuo Arikawa. Byte pair encoding: A text compression scheme that accelerates pattern matching. Technical report, Technical Report DOI-TR-161, Department of Informatics, Kyushu University, 1999.
- Yasuo Tabei, Hiroto Saigo, Yoshihiro Yamanishi, and Simon J Puglisi. Scalable partial least squares regression on grammar-compressed data matrices. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1875–1884, 2016.
- Kedar Tatwawadi, Mikel Hernaez, Idoia Ochoa, and Tsachy Weissman. Gtrac: fast retrieval from compressed collections of genomic variants. Bioinformatics, 32(17):i479–i486, 2016.
- Virginia Vassilevska and Ryan Williams. Finding, minimizing, and counting weighted subgraphs. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 455–464, 2009.
- Terry A. Welch. A technique for high-performance data compression. Computer, 17(6):8–19, 1984.
- Till Westmann, Donald Kossmann, Sven Helmer, and Guido Moerkotte. The implementation and performance of compressed databases. ACM Sigmod Record, 29(3):55–67, 2000.
- Ryan Williams. On the difference between closest, furthest, and orthogonal pairs: Nearly-linear vs barely-subquadratic complexity. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1207–1215. SIAM, 2018.
- Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proceedings of the ICM, volume 3, pages 3431–3472. World Scientific, 2018.
- Ian H Witten, Alistair Moffat, and Timothy C Bell. Managing gigabytes: compressing and indexing documents and images. Morgan Kaufmann, 1999.
- Jacob Ziv and Abraham Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, 23(3):337–343, 1977.
