Archive for December, 2016

Failed projects + the Cloud = Software reuse

December 26, 2016

Code reuse is one of those things that sounds like a winning idea to those outside of software development; those who write software for a living are happy to reuse other people’s code, but don’t want the hassle involved with others reusing their own code. From the management point of view, where is the benefit in having your developers help others get their product out the door when they should be working towards getting your product out the door?

Lots of projects get cancelled after significant chunks of software have been produced, some of it working. It would be great to get some return on this investment, but the likely income from selling software components is rarely large enough to make it worthwhile investing the necessary resources. The attractions of the next project soon appear more enticing than hanging around baby-sitting software from a cancelled project.

Cloud services, e.g., AWS and Azure, look like they will have a big impact on code reuse. The components of a failed project, i.e., those bits that work tolerably well, can be packaged up as a service and sold/licensed to other companies using the same cloud provider. Companies are already offering a wide variety of third-party cloud services; presumably the new software got written because no equivalent service was available on the provider’s cloud, so perhaps others are looking for just this service.

The upfront cost of sales is minimal: the services your re-purposed failed-project software provides get listed in various service directories. The software can just sit there waiting for customers to come along, or you could put some effort into drumming up customers. If sales pick up, it may become worthwhile offering support and even making enhancements.

What about the software built for non-failed projects? Software is a force multiplier and anybody working on a non-failed project wants to use this multiplier for their own benefit, not spend time making it available for others (I’m not talking about creating third-party APIs).


Is sorting a list of names racial discrimination?

December 21, 2016

Governments are starting to notice the large, and growing, role that algorithms have in the everyday life of millions of people. There is now an EU regulation, EU 2016/679, covering “… the protection of natural persons with regard to the processing of personal data…”

The wording in Article 22 has generated some waves: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.

But I think something much bigger is tucked away in a subsection of Article 14 paragraph 2: “…the controller shall provide the data subject with the following information…”, subsection (g): “…meaningful information about the logic involved…”. Explaining the program logic involved to managers, who are supposed to have some basic ability for rational thought, is hard enough, but the general public?

It is not necessary for the general public to acquire a basic understanding of the logic behind some of the decisions made by computers; rabble-rousing by sections of the press and social media can have a big impact.

A few years ago I was very happy to see a noticeable reduction in my car insurance. This reduction was not the result of anything I had done, but because insurance companies were no longer permitted to discriminate on the basis of gender; men had previously paid higher car insurance premiums because the data showed they were a higher risk than women (who used to pay lower premiums). At last, some of the crazy stuff done in the name of gender equality benefited men.

Sorting would appear to be discrimination-free, but ask any taxi driver about appearing first in a list of taxi phone numbers. Taxi companies are not called A1, AA, AAA because the owners are illiterate; they know all too well the power of appearing at the front of a list.

If you are in the market for a compiler writer whose surname starts with J (I have seen people make choices with less rationale than this), the following is obviously the most desirable expert listing (I don’t know any compiler writers called Kurt or Adalene):

Jones, Derek
Jönes, Kurt
Jônes, Adalene

Now Kurt might object, pointing out that in German the letter ö is sorted as if it had been written oe, which means that Jönes gets to be sorted before Jones (in Estonian, Hungarian and Swedish, Jones appears first).

What about Adalene? French does not contain the letter ö, so who is to say she should be sorted after Kurt? Unicode specifies a collation algorithm, but we are in the realm of public opinion here, not a techy debate.
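
To get a feel for how much the ordering depends on the collation rules in force, here is a minimal sketch using the stringi package (my choice of package, not something used in this post; the locale identifiers are illustrative and the orderings produced depend on the ICU collation data installed):

# A sketch of locale-dependent collation, assuming the stringi package (which
# wraps ICU) is installed; exact orderings depend on the ICU data shipped.
library("stringi")
 
names=c("Jones, Derek", "Jönes, Kurt", "Jônes, Adalene")
 
stri_sort(names, opts_collator=stri_opts_collator(locale="en_GB"))  # English rules
stri_sort(names, opts_collator=stri_opts_collator(locale="de_DE@collation=phonebook"))  # German phonebook: ö sorted as if written oe
stri_sort(names, opts_collator=stri_opts_collator(locale="sv_SE"))  # Swedish: ö is a distinct letter, sorted after z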

This issue could be resolved in the UK by creating a brexit locale specifying that good old English letters always sort before Johnny Foreigner letters.

Would use of such a brexit locale be permitted under EU 2016/679 (assuming the UK keeps this regulation), or would it be treated as racial discrimination?

I certainly would not want to be the person having to explain to the public the logic behind collation sequences and sort locales.


Automatically generated join-the-dots images

December 16, 2016

It is interesting to try and figure out what picture emerges from a join-the-dots puzzle (connect-the-dots in some parts of the world). Let’s have a go at some lightweight automatic generation of such a puzzle (heavy-weight techniques also exist).

If an image is available, expressed as a boolean matrix, R’s sample function can be used to select a small percentage of the black points.
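
As a minimal sketch of this sampling step (the matrix below is random noise standing in for a real scanned image; img, black, pcnt and sel are made-up names):

# Stand-in 'image': a 100x100 logical matrix, TRUE denoting a black pixel.
img=matrix(runif(100*100) < 0.1, nrow=100)
 
black=which(img, arr.ind=TRUE)   # row/column coordinates of every black pixel
pcnt=0.05                        # fraction of black points to keep
sel=black[sample.int(nrow(black), ceiling(pcnt*nrow(black))), ]
 
plot(sel[ , "col"], -sel[ , "row"],   # negate rows so the image is not upside down
	bty="n", xaxt="n", yaxt="n", pch=4, cex=0.5, xlab="", ylab="")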

Taking the output of the following equations:

x=seq(-4.7, 4.7, by=0.002)
 
# sqrt of a negative value and the as.numeric(...) terms produce NaN/Inf outside each curve's x-range; these points get filtered out later.
y1 = c(1,-.7,.5)*sqrt(c(1.3, 2,.3)^2 - x^2) - c(.6,1.5,1.75)  # 3 curves
y2 =0.6*sqrt(4 - x^2)-1.5/as.numeric(1.3 <= abs(x))           # 1 curve
y3 = c(1,-1,1,-1,-1)*sqrt(c(.4,.4,.1,.1,.8)^2 -(abs(x)-c(.5,.5,.4,.4,.3))^2) - c(.6,.6,.6,.6,1.5) # 5 curves
y4 =(c(.5,.5,1,.75)*tan(pi/c(4, 5, 4, 5)*(abs(x)-c(1.2,3,1.2,3)))+c(-.1,3.05, 0, 2.6))/
	as.numeric(c(1.2,.8,1.2,1) <= abs(x) & abs(x) <= c(3,3, 2.7, 2.7))                 # 4 curves
y5 =(1.5*sqrt(x^2 +.04) + x^2 - 2.4) / as.numeric(abs(x) <= .3)                            # 1 curve
y6 = (2*abs(abs(x)-.1) + 2*abs(abs(x)-.3)-3.1)/as.numeric(abs(x) <= .4)                    # 1 curve
y7 =(-.3*(abs(x)-c(1.6,1,.4))^2 -c(1.6,1.9, 2.1))/
	as.numeric(c(.9,.7,.6) <= abs(x) & abs(x) <= c(2.6, 2.3, 2))                       # 3 curves

and sampling 300 of the 20,012 points we get images such as the following:

[Image: rabbit outline, 300 sampled points]

A relatively large sample size is needed to reduce the possibility that a random selection fails to return any points within a significant area, but we do end up with many points clustered here and there.

library("plyr")
 
# Evaluate all 18 curves at every x value, one data frame row per (x, y) point.
rab_points=adply(x, 1, function(X) data.frame(x=rep(X, 18), y=c(
	c(1, -0.7, 0.5)*sqrt(c(1.3, 2, 0.3)^2-X^2) - c(0.6, 1.5 ,1.75),
	0.6*sqrt(4 - X^2)-1.5/as.numeric(1.3 <= abs(X)),
	c(1, -1, 1, -1, -1)*sqrt(c(0.4, 0.4, 0.1, 0.1, 0.8)^2-(abs(X)-c(0.5, 0.5, 0.4, 0.4, 0.3))^2) - c(0.6, 0.6, 0.6, 0.6, 1.5),
	(c(0.5, 0.5, 1, 0.75)*tan(pi/c(4, 5, 4, 5)*(abs(X)-c(1.2, 3, 1.2, 3)))+c(-0.1, 3.05, 0, 2.6))/
		as.numeric(c(1.2, 0.8, 1.2, 1) <= abs(X) & abs(X) <= c(3,3, 2.7, 2.7)),
	(1.5*sqrt(X^2+0.04) + X^2 - 2.4) / as.numeric(abs(X) <= 0.3),
	(2*abs(abs(X)-0.1)+2*abs(abs(X)-0.3)-3.1)/as.numeric(abs(X) <= 0.4),
	(-0.3*(abs(X)-c(1.6, 1, 0.4))^2-c(1.6, 1.9, 2.1))/
		as.numeric(c(0.9, 0.7, 0.6) <= abs(X) & abs(X) <= c(2.6, 2.3, 2))
						)))
rab_points$X1=NULL   # drop the index column added by adply
rb=subset(rab_points, (!is.na(x)) & (!is.na(y) & is.finite(y)))   # keep only points lying on a curve
 
x=sample.int(nrow(rb), 300)   # row indices of 300 randomly chosen points (note: x is reused here)
plot(rb$x[x], rb$y[x],
	bty="n", xaxt="n", yaxt="n", pch=4, cex=0.5, xlab="", ylab="")

A more uniform image can be produced by removing all points less than a given distance from some selected set of points. In this case the point in the first element is chosen, everything close to it removed, and the process repeated with the second (still remaining) element, and so on.

# Return the indices of points, among the next window_size rows, whose squared
# distance from point jp is less than min_dist; the caller removes these points.
rm_nearest=function(jp)
{
keep=((dot_im$x[(jp+1):(jp+window_size)]-dot_im$x[jp])^2+
      (dot_im$y[(jp+1):(jp+window_size)]-dot_im$y[jp])^2) < min_dist
keep=c(keep, TRUE) # make sure which() has something to return (an empty negative index would select zero rows)
return(jp+which(keep))
}
 
window_size=500   # how many of the following points to check for closeness
cur_jp=1
dot_im=rb
 
while (cur_jp <= nrow(dot_im))
   {
#   min_dist=0.05+0.50*runif(window_size)
   min_dist=0.05+0.30*runif(1)   # random squared-distance threshold for this point
   dot_im=dot_im[-rm_nearest(cur_jp), ]
   cur_jp=cur_jp+1
   }
 
plot(dot_im$x, dot_im$y,
	bty="n", xaxt="n", yaxt="n", pch=4, cex=0.5, xlab="", ylab="")

Since R supports vector operations, I want to do everything without using loops or if-statements. Yes, there is a while loop :-( Alternative, simple, non-loop suggestions are welcome.

Removing points with an average squared distance of less than 0.3 and 0.5, we get the following images (containing around 135-155 points):

[Images: rabbit outline after removing nearby points]

I was going to come up with a scheme for adding numbers; perhaps I will do this in another post.
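
In the meantime, a very crude sketch: R’s text function can put a number next to each remaining dot (the numbering below simply follows data frame order, which does not trace the outline, so it is not yet a usable puzzle):

# Illustrative only: label the dots in data frame order, not in outline order.
text(dot_im$x, dot_im$y, labels=seq_len(nrow(dot_im)), pos=3, cex=0.4)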

Click for more equations that generate images.


Christmas books for 2016

December 5, 2016

Here are a couple of suggestions for books this Christmas. As always, the timing of the books I suggest is based on when they reach the top of the books-to-read pile, not when they were published.

“The Utopia of rules” by David Graeber (who also wrote the highly recommended “Debt: The First 5000 Years”). Full of eye-opening insights into bureaucracy, how the ‘free’ world’s state apparatus came to have its current form, and how various cultures have reacted to the imposition of bureaucratic rules. Very readable.

“How Apollo Flew to the Moon” by W. David Woods. This is a technical nuts-and-bolts story of how Apollo got to the moon and back. It is the best book I have ever read on the subject, and as a teenager during the Apollo missions I read all the books I could find.

This year’s blog find was Scott Adams’ blog (yes, he of Dilbert fame). I had been watching Donald Trump’s rise for about a year, and understood that almost everything he said was designed to appeal to a specific audience; the fact that it sounded crazy to those not in the target audience was irrelevant. I found Scott’s blog contained lots of interesting insights into the goings-on of the US election; the insights into why Trump was saying the things he said have proved to be spot on.

For those of you interested in theoretical physics, I ought to mention Backreaction (regular updates, primarily about gravity-related topics) and Of Particular Significance (sporadic updates, primarily about particle physics).
