This thesis examines a modern concept for machine numbers based on interval arithmetic called 'Unums' and compares it to IEEE 754 floating-point arithmetic, evaluating possible uses of this format where floating-point numbers are inadequate. In the course of this examination, this thesis builds theoretical foundations for IEEE 754 floating-point numbers, interval arithmetic based on the projectively extended real numbers and Unums.

There isn't really a conclusion:

In the end, it all boils down to the question of whether using two 32 bit IEEE 754 floating-point numbers to model such a closed interval is better than using a single 64 bit IEEE 754 floating-point number for a diverse set of algorithms. The Unum strategy of finding a solution by going from a coarse to a fine grid could easily be realised with floating-point-bounded closed intervals, and could also prove useful for certain applications.
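The idea of modelling a closed interval with two floats, as the thesis discusses, can be sketched in a few lines. This is a minimal illustration, not code from the thesis: the helper names are made up, and it uses outward rounding via `math.nextafter` so that the true result always stays inside the computed bounds.

```python
import math

# Hypothetical helpers (not from the thesis): an interval is a (lo, hi) pair
# of floats, widened outward after every operation so the exact mathematical
# result is guaranteed to be contained in the computed interval.

def _down(x: float) -> float:
    # Round one ulp toward -infinity.
    return math.nextafter(x, -math.inf)

def _up(x: float) -> float:
    # Round one ulp toward +infinity.
    return math.nextafter(x, math.inf)

def add(a, b):
    """[a.lo, a.hi] + [b.lo, b.hi], rounded outward."""
    return (_down(a[0] + b[0]), _up(a[1] + b[1]))

def mul(a, b):
    """[a.lo, a.hi] * [b.lo, b.hi]: take extremes of all endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (_down(min(products)), _up(max(products)))

x = (1.0, 2.0)
y = (-3.0, 0.5)
print(add(x, y))   # contains [-2.0, 2.5], widened by one ulp each way
print(mul(x, y))   # contains [-6.0, 1.0], widened by one ulp each way
```

Widening by one ulp even when the result is exactly representable is slightly pessimistic, but it keeps the containment guarantee without querying the hardware rounding mode.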

A common question about hard problems is how easy do you have to make them before they really do become easy. The latest surprise is that even "easy" jigsaw puzzles are hard:

We prove the computational intractability of rotating and placing n square tiles into a 1×n array such that adjacent tiles are compatible--either equal edge colors, as in edge-matching puzzles, or matching tab/pocket shapes, as in jigsaw puzzles.

Beyond basic NP-hardness, we prove that it is NP-hard even to approximately maximize the number of placed tiles (allowing blanks), while satisfying the compatibility constraint between nonblank tiles, within a factor of 0.9999999851. (On the other hand, there is an easy 1/2-approximation.)

This is the first (correct) proof of inapproximability for edge-matching and jigsaw puzzles. Along the way, we prove NP-hardness of distinguishing, for a directed graph on n nodes, between having a Hamiltonian path (length n−1) and having at most 0.999999284(n−1) edges that form a vertex-disjoint union of paths. We use this gap hardness and gap-preserving reductions to establish similar gap hardness for 1×n jigsaw and edge-matching puzzles.
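The decision problem the abstract describes is easy to state in code: rotate and order n square tiles in a row so that each adjacent pair matches along the shared edge. Here is a tiny brute-force sketch of that check (illustrative only; the tile data is made up, and this exhaustive search is exactly what the NP-hardness result says cannot scale):

```python
# A square tile is a tuple of four edge colors (top, right, bottom, left);
# rotating the tile cyclically shifts the tuple.

def rotations(tile):
    for k in range(4):
        yield tile[k:] + tile[:k]

def solvable(tiles):
    """Can the tiles be ordered and rotated into a 1xn row so that each
    tile's left edge matches its left neighbour's right edge?"""
    def place(remaining, left_edge):
        if not remaining:
            return True
        for i, t in enumerate(remaining):
            for r in rotations(t):
                # r = (top, right, bottom, left); index 3 is left, 1 is right
                if left_edge is None or r[3] == left_edge:
                    if place(remaining[:i] + remaining[i + 1:], r[1]):
                        return True
        return False
    return place(list(tiles), None)

tiles = [('a', 'x', 'a', 'y'), ('a', 'y', 'a', 'x'), ('a', 'x', 'a', 'x')]
print(solvable(tiles))  # a matching order and rotation exists for these tiles
```

The search tries every ordering and rotation, so it runs in O(n! · 4^n) time; the paper's result implies no polynomial-time algorithm exists unless P = NP.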

If you aren't sure what an edge-matching puzzle is, the following figure, taken from the paper, will help:

The paper also mentions:

A recent popular (unsigned) edge-matching puzzle is Eternity II, which featured a US$2,000,000 prize for the first solver (before 2011). The puzzle remains unsolved (except presumably by its creator, Christopher Monckton). The best partial solution to date either places 247 out of the 256 pieces without error, or places all 256 pieces while correctly matching 467 out of 480 edges.

The most important question in computational theory at the current time is the P=NP issue. The security of most of our digital transactions depends on it, not to mention a lot of other questions in complexity theory. I Programmer's favourite complexity theorist Scott Aaronson has just published an up-to-date and very readable survey of where the whole issue stands:

In 1955, John Nash sent a remarkable letter to the National Security Agency, in which, seeking to build theoretical foundations for cryptography, he all but formulated what today we call the P ?= NP problem, considered one of the great open problems of science.

Here I survey the status of this problem in 2017, for a broad audience of mathematicians, scientists, and engineers. I offer a personal perspective on what it's about, why it's important, why it's reasonable to conjecture that P ≠ NP is both true and provable, why proving it is so hard, the landscape of related problems, and crucially, what progress has been made in the last half-century toward solving those problems. The discussion of progress includes diagonalization and circuit lower bounds; the relativization, algebrization, and natural proofs barriers; and the recent works of Ryan Williams and Ketan Mulmuley, which (in different ways) hint at a duality between impossibility proofs and algorithms.

If you have any clever/crackpot theories that prove that P=NP then you at least need to read and understand all 116 pages before you say anything to the world. Of course, if you have a practical way to reduce NP problems to P then, personally, I would say nothing and get on and enjoy your new-found freedom before someone else discovers the same thing.

There's a new Natural Language Processing library for Apache Spark that extends the Machine Learning logic of Spark to provide NLP annotations for machine learning pipelines that scale in a distribut [ ... ]