
Lecture 9 - Management Science
Restaurant Manager’s Crisis:
“I need to schedule my 18 servers across 6 shifts this weekend. Shifts have different lengths (4-6 hours), and if I don’t have enough experienced servers on busy shifts, we face penalties per missing experienced server from our parent company!”
A restaurant facing a weekend scheduling crisis:
La Étoile’s Problem:
Question: How to balance labor costs, penalties, AND staff preferences?
The financial stakes are significant with these large penalties:
Potentially a large difference between good and bad scheduling!
The real-world complexity we’re dealing with:

With varying shifts, preferences, and penalties, this will be a real challenge!
What you’ll understand after this lecture:
Remember the blindfolded search metaphor from last lecture?
This metaphor will guide us through all metaheuristics today!
Real problems often have thousands of local optima!

Question: Any idea how to escape local optima?
Why neighborhood optimization fails:
Technical View: Local Optima
Analogy: Department Silos
Sum of local bests ≠ Global best
Greedy algorithms can easily trap themselves:

Greedy allocates resources early, creating problems later!
Because we only ever accept better solutions during search:
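To make this concrete, here is a minimal sketch of a pure neighborhood search that only ever accepts improvements; the helpers `cost` and `get_neighbors` are placeholders for a problem-specific cost function and move generator:

```python
def local_search(schedule, cost, get_neighbors):
    """Keep taking improving moves; stop at the first local optimum."""
    while True:
        best_neighbor = min(get_neighbors(schedule), key=cost)
        if cost(best_neighbor) >= cost(schedule):
            return schedule  # trapped: no neighbor is better than the current schedule
        schedule = best_neighbor
```

Once every neighbor is worse or equal, the search stops, even if a much better schedule exists elsewhere.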

Question: What can we do to cope with this situation?
The fundamental components:
The Strategy
Think of it as strategic risk-taking that decreases over time!
How annealing steel inspired an optimization algorithm:
Annealing Metal:
Optimization:
The willingness to temporarily accept worse solutions is what enables finding the summit!
The probability of accepting worse solutions decreases as the temperature drops:

We essentially compare the cost of the new schedule to the current cost and decide whether to accept the change based on the temperature and the difference in cost.
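As a concrete illustration, here is a minimal sketch of such an acceptance rule, assuming the standard Metropolis-style criterion exp(-Δcost / temperature); the numbers in the example are illustrative:

```python
import math
import random

def acceptance_probability(cost_difference, temperature):
    """Chance of accepting a schedule that is `cost_difference` units worse."""
    if cost_difference <= 0:
        return 1.0  # improvements are always accepted
    return math.exp(-cost_difference / temperature)

# A schedule that is 100 units worse:
#   hot search  (T = 500): exp(-100/500) ≈ 0.82  -> usually accepted
#   cold search (T = 10):  exp(-100/10)  ≈ 5e-05 -> almost never accepted
accept_anyway = random.random() < acceptance_probability(100, 500)
```

The same acceptance function appears in the pseudocode below.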
How Simulated Annealing Works (Pseudocode)
def simulated_annealing_concept(current_schedule):
    temperature = 500  # Start "hot" (adventurous)
    best_schedule = current_schedule
    while temperature > 1:
        # Step 1: Try a random change (like swapping two shifts)
        new_schedule = make_random_change(current_schedule)
        # Step 2: Is it better?
        if cost(new_schedule) < cost(current_schedule):
            current_schedule = new_schedule  # Always accept improvements
        else:
            # NEW: Sometimes accept worse solutions!
            # Hot temperature = more likely to accept
            # Cold temperature = less likely to accept
            cost_difference = cost(new_schedule) - cost(current_schedule)
            if random() < acceptance_probability(cost_difference, temperature):
                current_schedule = new_schedule  # Accept anyway!
        # Step 3: Cool down (become less adventurous)
        temperature = temperature * 0.95
        # Remember the best we've ever seen
        if cost(current_schedule) < cost(best_schedule):
            best_schedule = current_schedule
    return best_schedule

A simplified weekend scheduling problem we’ll use throughout:
The initial greedy schedule has the following results:
Greedy Schedule Cost: €5,240
Labor: €2,250, Penalties: €1,700, Unhappiness: €1,290
Let’s see how Simulated Annealing can improve the solution!
How temperature affects the search behavior:

See how SA accepts worse solutions early, enabling escape from local optima!
Avoid these common implementation errors:
Mistake #1: Starting Too Cold
Mistake #2: Cooling Too Quickly
Quick cooling is tempting for speed, but defeats the purpose of SA!

The “Good Balance” explores widely early, then refines carefully. Often you need to balance exploration and exploitation by experimenting with different parameters.
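To see why cooling too quickly defeats the purpose, here is a small sketch that counts how many iterations the SA loop from earlier runs before the temperature falls below 1 (the starting temperature of 500 matches the pseudocode; the three cooling rates are illustrative):

```python
import math

start_temp, stop_temp = 500, 1

for cooling_rate in (0.80, 0.95, 0.99):
    # temperature after k steps is start_temp * cooling_rate**k,
    # so solve start_temp * cooling_rate**k < stop_temp for k
    iterations = math.ceil(math.log(stop_temp / start_temp) / math.log(cooling_rate))
    print(f"cooling rate {cooling_rate}: about {iterations} iterations")
```

With a rate of 0.80 the search gets only about 28 iterations of exploration, versus roughly 122 at 0.95 and 619 at 0.99.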
How natural selection inspires computational optimization:
Natural Selection:
Optimization:
Just like successful products get more market share, better solutions get more “offspring” in the next generation. It’s survival of the fittest, but for schedules, routes, or designs!
Four stages repeat each generation:
Let’s see each stage in detail with our restaurant problem!
How to choose which schedules get to “reproduce”:

Each tournament selects one parent, then we pair them up sequentially for crossover.
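A minimal sketch of tournament selection, assuming a population of candidate schedules and a `cost` function to minimize; the tournament size of 3 is an illustrative choice:

```python
import random

def tournament_select(population, cost, tournament_size=3):
    """Draw a few random schedules and return the cheapest one as a parent."""
    contestants = random.sample(population, tournament_size)
    return min(contestants, key=cost)

# Usage sketch: one parent per tournament, then pair them up for crossover
# parents = [tournament_select(population, cost) for _ in range(20)]
# pairs = list(zip(parents[0::2], parents[1::2]))
```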
Combine two parent schedules to create offspring:

Crossover randomly combines good building blocks from both parents!
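A minimal sketch of one-point crossover, assuming a schedule is encoded as a flat list with one server assignment per shift slot; the cut point is chosen at random:

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Combine the first part of one parent's schedule with the rest of the other's."""
    cut = random.randint(1, len(parent_a) - 1)
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2

# Toy example with a 6-slot encoding (entries are server names)
child_1, child_2 = one_point_crossover(
    ["Anna", "Ben", "Cara", "Dev", "Eli", "Fay"],
    ["Gus", "Hana", "Ivo", "Jin", "Kim", "Lea"],
)
```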
Random changes maintain diversity and explore new solutions:

Mutation adds random exploration, like trying something completely new occasionally!
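A minimal sketch of mutation under the same flat-list encoding: each shift slot is occasionally reassigned to a random server (the 5% rate is an illustrative assumption):

```python
import random

def mutate(schedule, all_servers, mutation_rate=0.05):
    """Occasionally reassign a shift slot to a random server to keep diversity."""
    mutated = list(schedule)
    for slot in range(len(mutated)):
        if random.random() < mutation_rate:
            mutated[slot] = random.choice(all_servers)
    return mutated
```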
How do offspring join the population?
Our approach is generational replacement with elitism: we create 20 offspring via repeated selection/crossover/mutation, but preserve the 2 best schedules from the current generation.

Elitism ensures we never lose our best solutions while exploring new ones!
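Putting the stages together, a minimal sketch of one generational step with elitism, reusing the selection, crossover, and mutation sketches above; the 20 offspring and 2 elites follow the approach described on this slide:

```python
def next_generation(population, cost, all_servers, n_offspring=20, n_elite=2):
    """One GA generation: keep the elites, then add offspring from selection/crossover/mutation."""
    elites = sorted(population, key=cost)[:n_elite]  # never lose the best schedules
    offspring = []
    while len(offspring) < n_offspring:
        parent_a = tournament_select(population, cost)
        parent_b = tournament_select(population, cost)
        for child in one_point_crossover(parent_a, parent_b):
            offspring.append(mutate(child, all_servers))
    return elites + offspring[:n_offspring]
```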
How the population improves over generations:

Notice how population average also improves (not just the best)!
Comparing exploration strategies on the restaurant problem:

GA maintains population diversity, SA explores single solution path!
Avoid these population-related errors:
Mistake #1: Everyone Becomes Identical
Mistake #2: Too Greedy in Selection
Technical pitfalls to watch out for:
Mistake #3: Breaking the Rules
Mistake #4: Evolution Too Slow
Using memory to avoid cycling through bad solutions:
Analogy:
In Optimization:
Like keeping a “lessons learned” list: you remember recent moves and avoid repeating them, but after a while, you might reconsider!
How Tabu Search Works (Pseudocode)
def tabu_search_concept(initial_schedule, max_iterations=100):
    tabu_list = []  # Our "never again" list
    current_solution = initial_schedule
    best_solution = current_solution
    for _ in range(max_iterations):  # "while not done" with a simple stopping rule
        # Look at all possible moves
        possible_moves = get_all_neighbor_moves(current_solution)
        # Filter out the "forbidden" moves
        allowed_moves = []
        for move in possible_moves:
            if move not in tabu_list:  # Not forbidden
                allowed_moves.append(move)
        # Pick the best allowed move (even if worse!)
        best_move = select_best(allowed_moves)
        current_solution = apply_move(current_solution, best_move)
        # Update best if improved
        if cost(current_solution) < cost(best_solution):
            best_solution = current_solution
        # Remember this move (add to tabu list)
        tabu_list.append(best_move)
        if len(tabu_list) > 10:  # Keep list size manageable
            tabu_list.pop(0)  # Forget oldest
    return best_solution

Real implementation with memory-based exploration:

Tabu Search’s memory prevents revisiting bad solutions!
Collective intelligence through chemical signals:
Reviews:
In Optimization:
Imagine each server-shift pairing has a “rating” that increases when it works well in a schedule. Over time, the best pairings naturally get chosen more often!
Four key stages in each iteration:
Let’s see each stage visually!
Two critical parameters control the balance:
Evaporation Rate (ρ)
Number of Ants
Start with ρ=0.3 and n_ants=20, then tune based on problem size.
Ants don’t pick randomly; they follow the chemical trails:

To build the initial pheromone matrix, each cell is initialized with a small positive value.
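A minimal sketch of how an ant could pick a server for a shift, assuming a pheromone matrix `pheromone[server][shift]` (18 servers and 6 shifts as in our problem, initialized with a small positive value); the choice is roulette-wheel style, proportional to pheromone, without an extra heuristic term:

```python
import random

n_servers, n_shifts = 18, 6
pheromone = [[0.1] * n_shifts for _ in range(n_servers)]  # small positive start values

def pick_server(shift, pheromone):
    """Choose a server for this shift with probability proportional to its pheromone."""
    weights = [pheromone[server][shift] for server in range(len(pheromone))]
    return random.choices(range(len(pheromone)), weights=weights, k=1)[0]
```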
After all ants build schedules:

Evaporation prevents premature convergence because old patterns fade away!
Good ants deposit more pheromones:

The best solutions leave the strongest trails for future iterations!
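A minimal sketch of the pheromone update after each iteration: every trail evaporates by the rate ρ, then each schedule deposits pheromone on the server-shift pairings it used, with cheaper schedules depositing more (the 1/cost deposit and the encoding of a schedule as one server per shift slot are assumptions):

```python
def update_pheromone(pheromone, schedules, cost, rho=0.3):
    """Evaporate all trails, then reinforce the pairings used by good schedules."""
    for row in pheromone:                    # evaporation: old patterns fade away
        for shift in range(len(row)):
            row[shift] *= (1 - rho)
    for schedule in schedules:               # deposit: better schedules leave stronger trails
        deposit = 1.0 / cost(schedule)
        for shift, server in enumerate(schedule):
            pheromone[server][shift] += deposit
```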
Full ACO implementation on restaurant staffing:

The colony learns collectively step by step.
How does the ant colony compare?

Any idea why ACO fares worse?
The Right Tool for the Wrong Job
ACO is designed for Sequential Path-Finding (Graph Traversal).
We forced a “Graph” algorithm onto a “Bin Packing” problem. SA and GA don’t care about geometry, so they adapted better!
A decision guide for algorithm selection:
| Method | Speed | Quality | Complexity | Best For |
|---|---|---|---|---|
| Random | xxxx | x | Trivial | Baseline |
| Greedy | xxx | xx | Simple | Quick decisions |
| LS | xx | xxx | Medium | Improvement |
| SA | xx | xxxx | Medium | Single solution |
| GA | x | xxxx | High | Population |
| TS | xx | xxx | Medium | Avoid cycles |
| ACO | x | xxxx | High | Changing Paths |
Why there’s no universal best algorithm:
“No Free Lunch Theorem”: No single algorithm is best for all problems. Your choice must match your problem structure:
Guidelines for successful implementation:
Hour 2: This Lecture
Hour 3: Notebook
Hour 4: Competition
La Étoile Restaurant Weekend Staffing
Choose your metaheuristic wisely as this is a tough problem!
How to find good parameters without wasting time:
temps = [100, 500], cooling = [0.95, 0.99]
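A minimal sketch of such a small parameter sweep; `run_simulated_annealing` is a hypothetical wrapper around the SA routine from earlier that takes a starting temperature and cooling rate and returns the best cost found:

```python
from itertools import product

temps = [100, 500]
cooling = [0.95, 0.99]
results = {}

for start_temp, cooling_rate in product(temps, cooling):
    # run_simulated_annealing(...) is assumed to exist; average a few runs
    # because metaheuristic results are stochastic
    costs = [run_simulated_annealing(start_temp, cooling_rate) for _ in range(3)]
    results[(start_temp, cooling_rate)] = sum(costs) / len(costs)

best_start_temp, best_cooling_rate = min(results, key=results.get)
```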
Technical realities when putting metaheuristics into production:
| Factor | Questions to Ask |
|---|---|
| Time Budget | How long can optimization run? |
| Solution Quality | Need optimal or “good enough”? |
| Explainability | Must justify decisions? |
| Problem Changes | Static or dynamic data? |
| Team Skills | Who maintains this code? |
Use the simplest method that meets your quality target. Complex metaheuristics are great but more costly to maintain!
Key Takeaways:
Take 20 minutes, then we start the practice notebook
Next up: You’ll implement metaheuristics for Bean Counter
Introduction to Metaheuristics | Dr. Tobias Vlćek