So I had just started reading SICP. It’s this amazing book that was used
at MIT to teach computer science to students who had never programmed before.
That’s when I thought that I should write a blog post with a solution for every
exercise I attempt. So I am gonna write solutions for all the exercises
in SICP. My plan is to finish this book in one year.
This is the third SICP exercise, and we finally start writing Scheme programs!
So without further delay…
The Question
Exercise 1.03: Define a procedure that takes three numbers as arguments
and returns the sum of the squares of the two larger numbers.
Thoughts
So here we will most likely have to copy the sum-of-squares code (make copying a habit:
half of what programmers do is copy-pasting; the other half is understanding
what you are copy-pasting). We will also have to write a test that picks the two
largest numbers.
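Here is a rough sketch of what I have in mind, reusing square and sum-of-squares from Section 1.1.4 of the book (the procedure name is my own):

```scheme
(define (square x) (* x x))

(define (sum-of-squares x y)
  (+ (square x) (square y)))

;; Whichever argument is smallest (or tied for smallest) is the one we drop.
(define (sum-of-squares-of-two-largest a b c)
  (cond ((and (<= a b) (<= a c)) (sum-of-squares b c))
        ((and (<= b a) (<= b c)) (sum-of-squares a c))
        (else (sum-of-squares a b))))
```

For example, (sum-of-squares-of-two-largest 1 2 3) should give 4 + 9 = 13.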
Exercise 1.04: Observe that our model of evaluation allows for
more combinations whose operators are compound expressions. Use this
observation to describe the behavior of the following procedure:
(define (a-plus-abs-b a b)
  ((if (> b 0) + -) a b))
The Answer
Now in this case, all that is being done is:
Check if b is positive or negative
And run (+ a b) if b is positive, or run (- a b) if b is zero or negative.
Notice that the if expression is surrounded by two sets of parentheses
instead of one. The if returns either the + or the - procedure, and the
outer parentheses apply that returned procedure to a and b.
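To make this concrete, a couple of quick checks (values worked out by hand):

```scheme
(a-plus-abs-b 3 5)   ; b > 0, so (+ 3 5)  => 8
(a-plus-abs-b 3 -5)  ; b <= 0, so (- 3 -5) => 8
```

Either way the result is a plus the absolute value of b.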
This is the 5th exercise of SICP. Here we compare a process
when run on an applicative-order interpreter and when run on
a normal-order interpreter.
The Question
Exercise 1.05: Ben Bitdiddle has invented a test to determine
whether the interpreter he is faced with is using applicative-order
evaluation or normal-order evaluation. He defines the following
two procedures:
(define (p) (p))
(define (test x y)
  (if (= x 0) 0 y))
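For context, the expression Ben then evaluates in the book is (test 0 (p)). A short sketch of the two behaviours:

```scheme
;; (test 0 (p))
;; Applicative order: the operand (p) is evaluated before test is applied,
;; and (p) just calls itself forever, so the interpreter never returns.
;; Normal order: operands are only evaluated when needed; (if (= 0 0) 0 y)
;; returns 0 without ever touching (p).
```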
This is the 6th SICP exercise. This, in my opinion, is a
tricky question which takes a bit of work to figure out.
So getting to the matter:
The Question
Exercise 1.06: Alyssa P. Hacker doesn’t see why if needs to be
provided as a special form. “Why can’t I just define it as an ordinary
procedure in terms of cond?” she asks. Alyssa’s friend Eva Lu Ator
claims this can indeed be done, and she defines a new version of if:
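The definition Eva gives (quoted from the book) is:

```scheme
(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))
```

(The catch, of course, is that new-if is an ordinary procedure, so under applicative-order evaluation both clauses get evaluated every time.)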
Exercise 1.07: The good-enough? test used in computing square
roots will not be very effective for finding the square roots of very
small numbers. Also, in real computers, arithmetic operations are
almost always performed with limited precision. This makes our
test inadequate for very large numbers. Explain these statements,
with examples showing how the test fails for small and large numbers.
An alternative strategy for implementing good-enough? is to watch
how guess changes from one iteration to the next and to
stop when the change is a very small fraction of the guess. Design
a square-root procedure that uses this kind of end test. Does this
work better for small and large numbers?
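One possible sketch of the alternative end test, watching the relative change between successive guesses (the 0.001 tolerance is my own choice, not the book's):

```scheme
(define (average x y) (/ (+ x y) 2))

(define (improve guess x)
  (average guess (/ x guess)))

;; Stop when the change between guesses is a tiny fraction of the guess.
(define (good-enough? guess next)
  (< (abs (/ (- next guess) guess)) 0.001))

(define (sqrt-iter guess x)
  (let ((next (improve guess x)))
    (if (good-enough? guess next)
        next
        (sqrt-iter next x))))

(define (sqrt x) (sqrt-iter 1.0 x))
```

Because the tolerance is relative rather than absolute, this version behaves sensibly for both very small and very large inputs.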
In this Exercise, we write a procedure for cube roots.
But it is not very different from the square-root procedure.
The Question
Exercise 1.08: Newton’s method for cube roots is based on the fact
that if y is an approximation to the cube root of x, then a better
approximation is given by the value:
$$ \frac{x/y^{2} + 2y}{3} $$
Use this formula to implement a cube-root procedure analogous
to the square-root procedure.
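A sketch of what the cube-root procedure might look like, mirroring the square-root procedure from the book (the absolute 0.001 tolerance is just a placeholder):

```scheme
;; Newton's update for cube roots: (x/y^2 + 2y) / 3
(define (improve guess x)
  (/ (+ (/ x (* guess guess)) (* 2 guess)) 3))

(define (good-enough? guess x)
  (< (abs (- (* guess guess guess) x)) 0.001))

(define (cube-root-iter guess x)
  (if (good-enough? guess x)
      guess
      (cube-root-iter (improve guess x) x)))

(define (cube-root x)
  (cube-root-iter 1.0 x))
```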
This is the first question in Section 2 of
Chapter 1 in SICP. Here we are comparing
recursion and iteration.
The Question
**Exercise 1.09:** Each of the following two procedures defines
a method for adding two positive integers in terms of the
procedures inc, which increments its argument by 1, and dec,
which decrements its argument by 1.
(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))

(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b))))
Using the substitution model, illustrate the process generated by
each procedure in evaluating (+ 4 5). Are these processes iterative or recursive?
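A sketch of the substitution traces, as I expect them to unfold:

```scheme
;; First definition -- a recursive process (deferred inc operations pile up):
;; (+ 4 5)
;; (inc (+ 3 5))
;; (inc (inc (+ 2 5)))
;; (inc (inc (inc (+ 1 5))))
;; (inc (inc (inc (inc (+ 0 5)))))
;; (inc (inc (inc (inc 5))))
;; ...
;; 9

;; Second definition -- an iterative process (all the state is in a and b):
;; (+ 4 5)
;; (+ 3 6)
;; (+ 2 7)
;; (+ 1 8)
;; (+ 0 9)
;; 9
```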
This is the $11^{th}$ exercise in SICP.
Here we write an iterative process and a
recursive process for the same function.
The Question
Exercise 1.11: A function f is defined by the rule that $f(n) = n$ if
$n < 3$ and $f(n) = f(n-1) + 2f(n-2) + 3f(n-3)$ if $n \geq 3$. Write a
procedure that computes f by means of a recursive process. Write
a procedure that computes f by means of an iterative process.
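A rough sketch of both versions (the iterative one carries the last three values of f as state; the procedure names are my own):

```scheme
;; Recursive process: a direct transcription of the definition.
(define (f-recursive n)
  (if (< n 3)
      n
      (+ (f-recursive (- n 1))
         (* 2 (f-recursive (- n 2)))
         (* 3 (f-recursive (- n 3))))))

;; Iterative process: a, b, c start as f(2), f(1), f(0); after `count`
;; shifts, c holds f(count).
(define (f-iterative n)
  (define (iter a b c count)
    (if (= count 0)
        c
        (iter (+ a (* 2 b) (* 3 c)) a b (- count 1))))
  (iter 2 1 0 n))
```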
This is the $12^{th}$ exercise in SICP. In this exercise,
we compute Pascal’s Triangle.
The Question
Exercise 1.12: The following pattern of numbers is called Pascal’s Triangle.
The numbers at the edge of the triangle are all 1, and each number
inside the triangle is the sum of the two numbers above it. Write
a procedure that computes elements of Pascal’s triangle by means
of a recursive process.
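A sketch of a recursive pascal procedure, where (pascal row col) is the element at a given row and column (both counted from 1, an indexing choice of mine):

```scheme
;; Edge elements are 1; every interior element is the sum of the two
;; elements above it.
(define (pascal row col)
  (if (or (= col 1) (= col row))
      1
      (+ (pascal (- row 1) (- col 1))
         (pascal (- row 1) col))))
```

For example, (pascal 5 3) gives 6, the middle element of the row 1 4 6 4 1.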
This is the 13th exercise in SICP. Here we prove
that Fib(n) is the closest integer to $\varphi^{n}/\sqrt{5}$.
The Question
Exercise 1.13: Prove that Fib(n) is the closest integer to $\varphi^{n}/\sqrt{5}$,
where $\varphi = (1 + \sqrt{5})/2$. Hint: Let $\psi = (1 - \sqrt{5})/2$. Use induction
and the definition of the Fibonacci numbers (see Section 1.2.2) to
prove that Fib(n) = $(\varphi^{n} - \psi^{n})/\sqrt{5}$.
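A sketch of the induction the hint points at, using the fact that $\varphi$ and $\psi$ are the two roots of $x^{2} = x + 1$ (so $\varphi^{2} = \varphi + 1$ and $\psi^{2} = \psi + 1$):

$$ \mathrm{Fib}(0) = 0 = \frac{\varphi^{0} - \psi^{0}}{\sqrt{5}}, \qquad \mathrm{Fib}(1) = 1 = \frac{\varphi - \psi}{\sqrt{5}} $$

$$ \frac{\varphi^{n} - \psi^{n}}{\sqrt{5}} + \frac{\varphi^{n-1} - \psi^{n-1}}{\sqrt{5}} = \frac{\varphi^{n-1}(\varphi + 1) - \psi^{n-1}(\psi + 1)}{\sqrt{5}} = \frac{\varphi^{n+1} - \psi^{n+1}}{\sqrt{5}} $$

So $\mathrm{Fib}(n) = (\varphi^{n} - \psi^{n})/\sqrt{5}$ for all $n$. And since $|\psi| < 1$, the error term $|\psi^{n}/\sqrt{5}|$ is always less than $1/2$, which is exactly what it means for $\mathrm{Fib}(n)$ to be the closest integer to $\varphi^{n}/\sqrt{5}$.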
This is the 14th exercise in SICP. Here, we draw
a tree of the count-change procedure, and find
how the space required and the number of steps
grow.
The Question
Exercise 1.14: Draw the tree illustrating the process generated by
the count-change procedure of Section 1.2.2 in making change for
11 cents. What are the orders of growth of the space and number of
steps used by this process as the amount to be changed increases?
This is the 16th question. Here we design an iterative
procedure for exponentiation which uses a logarithmic number
of steps.
The Question
Exercise 1.16: Design a procedure that evolves an iterative
exponentiation process that uses successive squaring and uses a
logarithmic number of steps, as does fast-expt. (Hint: Using the
observation that $(b^{n/2})^{2} = (b^{2})^{n/2}$, keep, along with
the exponent n and the base b, an additional state variable a, and
define the state transformation in such a way that the product $ab^{n}$
is unchanged from state to state. At the beginning of the process $a$ is
taken to be 1, and the answer is given by the value of $a$ at the
end of the process. In general, the technique of defining an invariant
quantity that remains unchanged from state to state is a powerful way
to think about the design of iterative algorithms.)
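A sketch of the iterative procedure, keeping $ab^{n}$ invariant (a starts at 1 and ends up holding the answer):

```scheme
(define (square x) (* x x))

(define (fast-expt b n)
  (expt-iter 1 b n))

;; Invariant: a * b^n equals the original b^n at every step.
(define (expt-iter a b n)
  (cond ((= n 0) a)
        ((even? n) (expt-iter a (square b) (/ n 2)))
        (else (expt-iter (* a b) b (- n 1)))))
```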
This is the 17th exercise of SICP, and I am posting after a long time. I
hope that this question will be interesting.
The Question
Exercise 1.17: The exponentiation algorithms in this section are based on performing
exponentiation by means of repeated multiplication. In a similar way, one can perform
integer multiplication by means of repeated addition. The following multiplication
procedure (in which it is assumed that our language can only add, not multiply) is analogous
to the expt procedure:
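The procedure referred to (quoted from the book) is:

```scheme
(define (* a b)
  (if (= b 0)
      0
      (+ a (* a (- b 1)))))
```

(The exercise then goes on to assume operations double and halve and asks for a multiplication procedure analogous to fast-expt that uses a logarithmic number of steps.)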
This is the $18^{th}$ question in SICP, and it is quite similar
to the 16th question.
The Question
Exercise 1.18: Using the results of Exercise 1.16 and Exercise 1.17,
devise a procedure that generates an iterative process for multiplying two integers in terms
of adding, doubling, and halving, and uses a logarithmic number of steps.
My Thoughts
This is quite direct. Write an iterative procedure that multiplies in
a logarithmic number of steps. A combination of Exercises 1.16 and 1.17.
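A sketch of what I expect the answer to look like, assuming double and halve from Exercise 1.17 (halve is only ever applied to even numbers here):

```scheme
(define (double x) (* x 2))
(define (halve x) (/ x 2))

;; Iterative multiplication in a logarithmic number of steps.
;; Invariant: acc + a*b stays equal to the original product.
(define (fast-mult a b)
  (mult-iter 0 a b))

(define (mult-iter acc a b)
  (cond ((= b 0) acc)
        ((even? b) (mult-iter acc (double a) (halve b)))
        (else (mult-iter (+ acc a) a (- b 1)))))
```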
Now, this question is not as easy as the last.
This is the $19^{th}$ exercise from SICP.
The Question
Exercise 1.19: There is a clever algorithm for computing the Fibonacci
numbers in a logarithmic number of steps. Recall the transformation of the
state variables a and b in the fib-iter process of Section 1.2.2:
$ a \leftarrow a + b $ and $ b \leftarrow a$. Call this transformation
$T$ , and observe that applying $T$ over and over again $n$ times,
starting with 1 and 0, produces the pair Fib( n + 1) and Fib( n ). In
other words, the Fibonacci numbers are produced by applying $T^{n}$ ,
the $n^{th}$ power of the transformation $T$ , starting with the pair
(1, 0). Now consider $T$ to be the special case of $p = 0$ and $q = 1$
in a family of transformations $T_{pq}$ , where $T_ {pq}$ transforms the
pair $( a, b )$ according to $a \leftarrow bq + aq + ap$ and $b \leftarrow
bp + aq$. Show that if we apply such a transformation $T_{pq}$ twice,
the effect is the same as using a single transformation $T_{p'q'}$ of
the same form, and compute $p'$ and $q'$ in terms of $p$ and $q$.
This gives us an explicit way to square these transformations, and
thus we can compute $T^{n}$ using successive squaring, as in the fast-expt
procedure. Put this all together to complete the following procedure,
which runs in a logarithmic number of steps:
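A sketch of the completed procedure, with $p' = p^{2} + q^{2}$ and $q' = 2pq + q^{2}$ filled in (the skeleton is the one the book gives):

```scheme
(define (fib n)
  (fib-iter 1 0 0 1 n))

(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   (+ (* p p) (* q q))     ; p'
                   (+ (* 2 p q) (* q q))   ; q'
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))
                        (+ (* b p) (* a q))
                        p
                        q
                        (- count 1)))))
```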
This is the $20^{th}$ question in SICP. In general, I feel that
this is a rather easy question compared to the previous ones.
The Question
Exercise 1.20: The process that a procedure generates is of course
dependent on the rules used by the interpreter. As an example,
consider the iterative gcd procedure given above. Suppose we
were to interpret this procedure using normal-order evaluation,
as discussed in Section 1.1.5. (The normal-order-evaluation rule
for if is described in Exercise 1.5.) Using the substitution method
(for normal order), illustrate the process generated in evaluating
(gcd 206 40) and indicate the remainder operations that are
actually performed. How many remainder operations are actually
performed in the normal-order evaluation of (gcd 206 40) ? In
the applicative-order evaluation?
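For reference, the gcd procedure from the book that the exercise is about:

```scheme
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
```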
This is the 21st question from SICP, and I feel this exercise is
very simple compared to the ones that come after it.
The Question
Exercise 1.21: Use the smallest-divisor procedure to find the
smallest divisor of each of the following numbers: 199, 1999, 19999
The Answer
This question is quite simple: Evaluate this code in a REPL.
So here’s the code for smallest-divisor:
(define (smallest-divisor n)
  (find-divisor n 2))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(define (divides? a b)
  (= (remainder b a) 0))
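Evaluating these at the REPL (square is from Section 1.1.4; the results are easy to double-check by hand):

```scheme
(define (square x) (* x x))

(smallest-divisor 199)    ; => 199  (199 is prime)
(smallest-divisor 1999)   ; => 1999 (1999 is prime)
(smallest-divisor 19999)  ; => 7    (19999 = 7 * 2857)
```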
This is the 22nd question in SICP. I feel that this question is much more
interesting than a lot of the others, because we finally get to play with
the runtime! (I wonder how you interfere with memory management.)
The Question
Exercise 1.22: Most Lisp implementations include a primitive
called runtime that returns an integer that specifies the amount
of time the system has been running (measured, for example,
in microseconds). The following timed-prime-test procedure,
when called with an integer n , prints n and checks to see if n is
prime. If n is prime, the procedure prints three asterisks followed
by the amount of time used in performing the test.
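For reference, the timed-prime-test procedure quoted in the book looks like this (it assumes the prime? predicate from earlier in the section):

```scheme
(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (runtime)))

(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime (- (runtime) start-time))))

(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))
```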
Exercise 1.23: The smallest-divisor procedure shown at the
start of this section does lots of needless testing: After it checks to
see if the number is divisible by 2 there is no point in checking to
see if it is divisible by any larger even numbers. This suggests that
the values used for test-divisor should not be 2, 3, 4, 5, 6, . . . ,
but rather 2, 3, 5, 7, 9, . . . . To implement this change, define a
procedure next that returns 3 if its input is equal to 2 and otherwise
returns its input plus 2. Modify the smallest-divisor procedure
to use (next test-divisor) instead of (+ test-divisor 1) .
With timed-prime-test incorporating this modified version of
smallest-divisor , run the test for each of the 12 primes found
in Exercise 1.22. Since this modification halves the number of
test steps, you should expect it to run about twice as fast. Is this
expectation confirmed? If not, what is the observed ratio of the
speeds of the two algorithms, and how do you explain the fact that
it is different from 2?
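A sketch of the change being asked for, reusing square and divides? from Exercise 1.21:

```scheme
(define (next test-divisor)
  (if (= test-divisor 2)
      3
      (+ test-divisor 2)))

;; find-divisor modified to use next instead of (+ test-divisor 1).
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (next test-divisor)))))
```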
This is the 24th question from SICP. And yet again, I feel this is a
really easy question.
The Question
Exercise 1.24: Modify the timed-prime-test procedure of Exercise
1.22 to use fast-prime? (the Fermat method), and test each of the
12 primes you found in that exercise. Since the Fermat test has
$\Theta(\log n)$ growth, how would you expect the time to test primes
near 1,000,000 to compare with the time needed to test primes near
1000? Do your data bear this out? Can you explain any discrepancy you
find?
This is the 25th question from SICP. I personally find that this
question is really interesting.
The Question
Exercise 1.25: Alyssa P. Hacker complains that we went to a lot of
extra work in writing expmod . After all, she says, since we already
know how to compute exponentials, we could have simply written
(define (expmod base exp m)
  (remainder (fast-expt base exp) m))
Is she correct? Would this procedure serve as well for our fast prime
tester? Explain.
This is the 26th question from SICP. This question is about orders of
growth and whatnot.
The Question
Exercise 1.26: Louis Reasoner is having great difficulty doing
Exercise 1.24. His fast-prime? test seems to run more slowly than his
prime? test. Louis calls his friend Eva Lu Ator over to help.
When they examine Louis’s code, they find that he has rewritten
the expmod procedure to use an explicit multiplication, rather than
calling square :
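The rewritten expmod they find (quoted from the book) is:

```scheme
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (* (expmod base (/ exp 2) m)
                       (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base
                       (expmod base (- exp 1) m))
                    m))))
```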
This is the 27th question from SICP. Here we just run a procedure.
The Question
Exercise 1.27: Demonstrate that the Carmichael numbers listed in
Footnote 1.47 really do fool the Fermat test. That is, write a procedure
that takes an integer $n$ and tests whether $a^{n}$ is congruent to
$a$ modulo $n$ for every $a < n$, and try your procedure on the given
Carmichael numbers.
My Thoughts
This is also rather simple. We just need to use the Fermat test and
show that it is fooled by the Carmichael numbers. Here is Footnote 1.47
for our reference.
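Footnote 1.47 lists the Carmichael numbers 561, 1105, 1729, 2465, 2821, and 6601. A sketch of the full Fermat check, assuming the expmod procedure from the book:

```scheme
;; True if a^n is congruent to a modulo n for every a < n.
(define (fermat-full n)
  (define (try-it a)
    (= (expmod a n n) a))
  (define (iter a)
    (cond ((= a n) #t)
          ((try-it a) (iter (+ a 1)))
          (else #f)))
  (iter 1))

;; (fermat-full 561)  ; => #t, even though 561 = 3 * 11 * 17 is not prime
```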
This is the 28th exercise from SICP. Here we implement an unfoolable
version of the Fermat test called the Miller-Rabin test.
The Question
Exercise 1.28: One variant of the Fermat test that cannot be fooled
is called the Miller-Rabin test (Miller 1976; Rabin 1980). This starts
from an alternate form of Fermat’s Little Theorem, which states
that if $n$ is a prime number and $a$ is any positive integer less than $n$,
then $a$ raised to the $(n-1)$-st power is congruent to 1 modulo $n$. To
test the primality of a number n by the Miller-Rabin test, we pick a
random number $a < n$ and raise $a$ to the $( n − 1)$-st power modulo
$n$ using the expmod procedure. However, whenever we perform the
squaring step in expmod , we check to see if we have discovered a
“nontrivial square root of 1 modulo $n$ ,” that is, a number not equal
to 1 or n − 1 whose square is equal to 1 modulo n . It is possible to
prove that if such a nontrivial square root of 1 exists, then $n$ is not
prime. It is also possible to prove that if $n$ is an odd number that
is not prime, then, for at least half the numbers $a < n$, computing
$a^{n-1}$ in this way will reveal a nontrivial square root of 1 modulo $n$.
(This is why the Miller-Rabin test cannot be fooled.) Modify the
expmod procedure to signal if it discovers a nontrivial square root
of 1, and use this to implement the Miller-Rabin test with a procedure
analogous to fermat-test. Check your procedure by testing
various known primes and non-primes. Hint: One convenient way
to make expmod signal is to have it return 0.
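My sketch of how this could look (the 0-signalling trick is the one the hint suggests; square-check and miller-rabin-test are my own names):

```scheme
;; Squares x mod m, but returns 0 if x is a nontrivial square root of 1 mod m.
(define (square-check x m)
  (if (and (not (= x 1))
           (not (= x (- m 1)))
           (= (remainder (* x x) m) 1))
      0
      (remainder (* x x) m)))

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp) (square-check (expmod base (/ exp 2) m) m))
        (else (remainder (* base (expmod base (- exp 1) m)) m))))

;; Once a 0 appears it propagates, so the final (= ... 1) test fails.
(define (miller-rabin-test n)
  (define (try-it a)
    (= (expmod a (- n 1) n) 1))
  (try-it (+ 1 (random (- n 1)))))

(define (fast-prime? n times)
  (cond ((= times 0) #t)
        ((miller-rabin-test n) (fast-prime? n (- times 1)))
        (else #f)))
```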
This is the $29^{th}$ exercise in SICP. Here, we write a procedure
to compute integrals using (Homer) Simpson’s rule.
The Question
Exercise 1.29: Simpson’s Rule is a more accurate method of
numerical integration than the method illustrated above. Using
Simpson’s Rule, the integral of a function f between a and b is
approximated as
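The approximation from the book is (with $h = (b - a)/n$, for some even integer $n$, and $y_{k} = f(a + kh)$):

$$ \frac{h}{3}\left[y_{0} + 4y_{1} + 2y_{2} + 4y_{3} + \cdots + 2y_{n-2} + 4y_{n-1} + y_{n}\right] $$

A sketch of how I might implement it, assuming the sum procedure from Section 1.3.1 is defined:

```scheme
(define (simpson f a b n)
  (define h (/ (- b a) n))
  (define (y k) (f (+ a (* k h))))
  ;; Coefficient 1 at the ends, 4 for odd k, 2 for even k.
  (define (term k)
    (* (cond ((or (= k 0) (= k n)) 1)
             ((odd? k) 4)
             (else 2))
       (y k)))
  (* (/ h 3) (sum term 0 (lambda (k) (+ k 1)) n)))
```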
This is the $30^{th}$ exercise from SICP. Here, we write an iterative
version of `sum`.
The Question
Exercise 1.30: The sum procedure above generates a linear
recursion. The procedure can be rewritten so that the sum is performed
iteratively. Show how to do this by filling in the missing expressions
in the following definition:
(define (sum term a next b)
  (define (iter a result)
    (if 〈??〉
        〈??〉
        (iter 〈??〉 〈??〉)))
  (iter 〈??〉 〈??〉))
The Answer
This is a rather simple question so I am just gonna quickly give an
explanation and an answer:
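Filling in the blanks, something like this should work:

```scheme
(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (+ result (term a)))))
  (iter a 0))
```

The running total is accumulated in result, so no work is deferred and the process is iterative.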
This is the $31^{st}$ exercise from SICP. Here, we define a procedure
called product, analogous to sum. Using product, we define
factorial and compute approximations of $\pi$.
The Question
Exercise 1.31:
A) The sum procedure is only the simplest of a vast number of
similar abstractions that can be captured as higher-order
procedures. Write an analogous procedure called product
that returns the product of the values of a function at points
over a given range. Show how to define factorial in terms of
product . Also use product to compute approximations to π
using the formula
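A sketch of product plus the two uses asked for. The formula the book gives is $\pi/4 = \frac{2 \cdot 4 \cdot 4 \cdot 6 \cdot 6 \cdot 8 \cdots}{3 \cdot 3 \cdot 5 \cdot 5 \cdot 7 \cdot 7 \cdots}$, whose k-th factor I take to be $(2k)(2k+2)/(2k+1)^{2}$:

```scheme
(define (product term a next b)
  (if (> a b)
      1
      (* (term a) (product term (next a) next b))))

(define (identity x) x)
(define (inc x) (+ x 1))

(define (factorial n)
  (product identity 1 inc n))

;; pi/4 ~= (2*4*4*6*6*8*...)/(3*3*5*5*7*7*...)
(define (pi-approx n)
  (define (term k)
    (let ((num (* (* 2 k) (+ (* 2 k) 2)))
          (den (* (+ (* 2 k) 1) (+ (* 2 k) 1))))
      (/ num den)))
  (* 4.0 (product term 1 inc n)))
```

The product converges slowly, so a fairly large n is needed for a decent approximation of $\pi$.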
This is the $32^{nd}$ exercise from SICP. Here we write a procedure
that generalizes the sum and product procedures from
the previous exercises.
The Question
Exercise 1.32:
A) Show that sum and product (Exercise 1.31) are both special cases of a still more
general notion called accumulate that combines a collection of terms
using some general accumulation function:
(accumulate combiner null-value term a next b)
accumulate takes as arguments the same term and range
specifications as sum and product, together with a combiner
procedure (of two arguments) that specifies how the current term is
to be combined with the accumulation of the preceding terms
and a null-value that specifies what base value to use when
the terms run out. Write accumulate and show how sum and
product can both be defined as simple calls to accumulate.
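A sketch of accumulate and the two re-definitions:

```scheme
(define (accumulate combiner null-value term a next b)
  (if (> a b)
      null-value
      (combiner (term a)
                (accumulate combiner null-value term (next a) next b))))

(define (sum term a next b)
  (accumulate + 0 term a next b))

(define (product term a next b)
  (accumulate * 1 term a next b))
```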
This is the $33^{rd}$ exercise from SICP. Here, we write an even more
general version of accumulate.
The Question
Exercise 1.33: You can obtain an even more general version of
accumulate (Exercise 1.32) by introducing the
notion of a filter on the terms to be combined. That is, combine
only those terms derived from values in the range that satisfy a
specified condition. The resulting filtered-accumulate abstraction
takes the same arguments as accumulate, together with an additional
predicate of one argument that specifies the filter. Write
filtered-accumulate as a procedure. Show how to express the
following using filtered-accumulate:
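A sketch of the recursive version (I pass the filter as the last argument, which is just my own choice of argument order):

```scheme
(define (filtered-accumulate combiner null-value term a next b filter)
  (cond ((> a b) null-value)
        ((filter a)
         (combiner (term a)
                   (filtered-accumulate combiner null-value
                                        term (next a) next b filter)))
        (else (filtered-accumulate combiner null-value
                                   term (next a) next b filter))))
```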
This is the $35^{th}$ exercise from SICP. Here, we calculate the golden ratio
($\varphi$) by finding the fixed point of a function.
The Question
Exercise 1.35: Show that the golden ratio $\varphi$ (Section 1.2.2)
is a fixed point of the transformation $x \mapsto 1 + 1/x$ , and
use this fact to compute $\varphi$ by means of the fixed-point
procedure.
The Answer
This is simple. We already know that $\varphi$ is equal to $\frac{1 +
\sqrt{5}}{2}$ (refer to Exercise 1.13), which is approximately 1.6180 as a decimal.
And since $\varphi^{2} = \varphi + 1$, dividing both sides by $\varphi$ gives
$1 + 1/\varphi = \varphi$, so $\varphi$ is indeed a fixed point of $x \mapsto 1 + 1/x$.
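Using the fixed-point procedure from Section 1.3.3 (the tolerance is the book's), the computation looks roughly like this:

```scheme
(define tolerance 0.00001)

(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

(fixed-point (lambda (x) (+ 1 (/ 1 x))) 1.0)
; => approximately 1.618 (phi)
```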
This is the $36^{th}$ exercise from SICP. Here we modify fixed-point so that
it prints each guess. We then find a solution to $x^{x} = 1000$.
The Question
Exercise 1.36: Modify fixed-point so that it prints the sequence of
approximations it generates, using the newline and display primitives shown
in Exercise 1.22. Then find a solution to $x^{x} = 1000$ by finding a fixed
point of $x \mapsto \log(1000)/\log(x)$. (Use Scheme’s primitive log procedure,
which computes natural logarithms.) Compare the number of steps this takes with
and without average damping. (Note that you cannot start fixed-point with a
guess of 1, as this would cause division by $\log(1) = 0$.)
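A sketch of the modified fixed-point and the two ways of running the search (the damping is written inline here):

```scheme
(define tolerance 0.00001)
(define (average a b) (/ (+ a b) 2))

;; fixed-point from the book, with a display of each guess added.
(define (fixed-point f first-guess)
  (define (close-enough? v1 v2)
    (< (abs (- v1 v2)) tolerance))
  (define (try guess)
    (display guess)
    (newline)
    (let ((next (f guess)))
      (if (close-enough? guess next)
          next
          (try next))))
  (try first-guess))

;; Without average damping:
(fixed-point (lambda (x) (/ (log 1000) (log x))) 2.0)

;; With average damping:
(fixed-point (lambda (x) (average x (/ (log 1000) (log x)))) 2.0)
```

Running both makes the comparison easy: the damped version settles on the answer in noticeably fewer printed guesses.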
This is the $37^{th}$ question in SICP. Here we find the value of a continued fraction.
The Question
Part A
An infinite continued fraction is an expression of the form
$$
f = \frac{N_{1}}{D_{1} + \frac{N_{2}}{D_{2} + \frac{N_{3}}{D_{3} + \cdots}}}
$$
As an example, one can show that the infinite continued fraction expansion with the $N_{i}$
and the $D_{i}$ all equal to 1 produces $1/\varphi$ where $\varphi$ is the golden ratio
(described in Section 1.2.2). One way to approximate an infinite continued fraction is to
truncate the expansion after a given number of terms. Such a truncation, a so-called
k-term finite continued fraction, has the form
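$$ \frac{N_{1}}{D_{1} + \frac{N_{2}}{\ddots + \frac{N_{k}}{D_{k}}}} $$

The exercise asks for a procedure (cont-frac n d k) that computes this, where n and d are procedures of one argument giving $N_{i}$ and $D_{i}$. A recursive sketch, checked against the $1/\varphi$ example:

```scheme
(define (cont-frac n d k)
  (define (frac i)
    (if (> i k)
        0
        (/ (n i) (+ (d i) (frac (+ i 1))))))
  (frac 1))

;; With all N_i and D_i equal to 1 this should approach 1/phi ~= 0.6180
(cont-frac (lambda (i) 1.0) (lambda (i) 1.0) 12)
```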
This is the $38^{th}$ exercise of SICP. Here we find the value of $e$.
The Question
Exercise 1.38: In 1737, the Swiss mathematician Leonhard
Euler published a memoir *De Fractionibus Continuis*, which
included a continued fraction expansion for $e − 2$, where
$e$ is the base of the natural logarithms. In this fraction, the
$N_{i}$ are all 1, and the $D_{i}$ are successively 1, 2, 1, 1, 4, 1, 1,
6, 1, 1, 8, . . .. Write a program that uses your cont-frac
procedure from Exercise 1.37 to approximate $e$, based on
Euler’s expansion.
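A sketch of how I would plug Euler's D sequence into cont-frac from Exercise 1.37 (every third $D_{i}$, starting at $i = 2$, is the next even number; all the others are 1):

```scheme
(define (euler-d i)
  (if (= (remainder i 3) 2)
      (* 2 (/ (+ i 1) 3))
      1))

;; The continued fraction gives e - 2, so add 2 back.
(define (approx-e k)
  (+ 2 (cont-frac (lambda (i) 1.0) euler-d k)))

;; (approx-e 10) should already be close to 2.71828...
```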
where $x$ is in radians. Define a procedure (tan-cf x k) that computes an approximation
to the tangent function based on Lambert’s formula. k specifies the number of terms
to compute, as in Exercise 1.37.
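For reference, Lambert's formula is

$$ \tan x = \frac{x}{1 - \frac{x^{2}}{3 - \frac{x^{2}}{5 - \cdots}}} $$

A sketch using cont-frac from Exercise 1.37; the subtractions are handled by making every $N_{i}$ after the first negative:

```scheme
(define (tan-cf x k)
  (cont-frac (lambda (i) (if (= i 1) x (- (* x x))))
             (lambda (i) (- (* 2 i) 1))
             k))

;; (tan-cf 1.0 10) should be close to (tan 1.0) ~= 1.5574
```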
Exercise 1.41: Define a procedure double that takes a procedure of one
argument as argument and returns a procedure that applies the original procedure
twice. For example, if inc is a procedure that adds 1 to its argument, then
(double inc) should be a procedure that adds 2. What value is returned by
(((double (double double)) inc) 5)
The Answer
double is very simple. If we give it the procedure inc, we get back a procedure that computes (inc (inc x)). This outlines the following procedure:
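```scheme
(define (double f)
  (lambda (x) (f (f x))))

(define (inc x) (+ x 1))

;; ((double inc) 5)                    => 7
;; (((double (double double)) inc) 5)  => 21, since inc ends up applied 2^4 = 16 times
```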
Exercise 1.42: Let $f$ and $g$ be two one-argument functions. The
composition $f$ after $g$ is defined to be the function $x \mapsto f(g(x))$.
Define a procedure compose that implements composition. For example, if inc
is a procedure that adds 1 to its argument,
((compose square inc) 6)
49
The Answer
The above proposed compose is similar to the last exercise’s double.
However, rather than applying the same function twice, we apply two different
functions, the first after the second. This description gives us the following
function:
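```scheme
(define (compose f g)
  (lambda (x) (f (g x))))

(define (square x) (* x x))
(define (inc x) (+ x 1))

;; ((compose square inc) 6) => (square (inc 6)) => 49
```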
This section describes two methods for checking whether
a number n is prime or not: one with order of growth $\Theta(\sqrt{n})$
and another with growth $\Theta(\log n)$.
The First method
A common way to check if a number is prime is by looking at
the factors of that number. Take the number 2. Its factors are 1
and 2. Thus it is prime. Or take 6. It can be expressed as
$2 \times 3$ or $6 \times 1$, so it is not prime. So all we need to do is check whether the number
has factors other than 1 and itself, by trying divisors incrementally starting from 2.
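A minimal sketch of this first method, essentially the trial-division idea the book develops as smallest-divisor:

```scheme
;; n is prime iff trial division from 2 up to sqrt(n) finds no divisor.
(define (prime? n)
  (define (divides? a b) (= (remainder b a) 0))
  (define (find-divisor test)
    (cond ((> (* test test) n) n)
          ((divides? test n) test)
          (else (find-divisor (+ test 1)))))
  (and (> n 1) (= n (find-divisor 2))))
```

We only need to try divisors up to $\sqrt{n}$, which is where the $\Theta(\sqrt{n})$ order of growth comes from.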
In this example, we write a recursive procedure to calculate
the number of possible ways to make change for a given amount.
The Problem
Suppose you have a sum of 1 dollar. Let’s say you bought a
coffee for 90 cents, and the cafeteria needs to pay you
back 10 cents. How do we find the number of ways the cafeteria
can pay you back your change? One such way is to manually
count the possible ways.
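The recursive procedure the book develops for this (Section 1.2.2, with US coin denominations) is:

```scheme
(define (count-change amount) (cc amount 5))

;; Ways to make change = ways without the first kind of coin
;;                      + ways after using one coin of the first kind.
(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount
                     (- kinds-of-coins 1))
                 (cc (- amount
                        (first-denomination kinds-of-coins))
                     kinds-of-coins)))))

(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))

;; (count-change 10) => 4, matching a manual count for the 10-cent example:
;; one dime, two nickels, one nickel and five pennies, or ten pennies.
```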
All the procedures explained previously were much like
ordinary mathematical functions. However, in mathematics
we are usually concerned with declarative (what is) descriptions,
whereas in programming we need imperative (how to) descriptions.
We soon run into issues when deriving procedures from their
mathematical definitions. Consider the definition of the
square root:
$ \sqrt{x} = $ the $y$ such that $ y \geq 0 $ and $ y^{2} = x $
The above is a perfectly legitimate mathematical definition, BUT it doesn't
tell us how to compute the square root.
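For contrast, here is the imperative (how to) description the book builds up instead: Newton's method of successive approximations.

```scheme
(define (square x) (* x x))

(define (average x y)
  (/ (+ x y) 2))

(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))

;; A better guess is the average of the guess and x divided by the guess.
(define (improve guess x)
  (average guess (/ x guess)))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

(define (sqrt x)
  (sqrt-iter 1.0 x))
```

Unlike the declarative definition, this spells out exactly which steps to take: start with a guess, improve it, and stop when the guess is good enough.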