alphaGo and style

Forum Index > BW General
evanthebouncy!
Profile Blog Joined June 2006
United States12796 Posts
Last Edited: 2016-03-12 22:54:26
March 12 2016 22:53 GMT
#1
I would like to offer a story and a viewpoint on "style". It took me 2 hours to flesh it out and write, so hopefully you enjoy it.

I followed Brood War through my high school years until StarCraft 2 came out during my undergrad. The first pro game I watched live on Afreeca was the infamous triple bunker rush by Boxer vs Yellow, and some of the last games I watched were from when 5 hatch hydra ZvP was becoming commonplace.

As we all know, in StarCraft or any competitive video game there is this term called the "meta". In short, the meta(game) is the set of dominant styles of play at a given time.

To put it in concrete terms, take ZvP for example: the earlier ZvP meta involved securing a second base while fending off early Protoss pressure, then moving into a lurker/zergling containment.


Savior brought the 3 hatch mutalisk style, which involves going for mutalisk and lurker tech simultaneously; the mutalisks harass and threaten backstabs to keep the Protoss at bay, while the lurkers secure a third base and eventually a containment.


The 3 hatch muta was eventually trumped by Bisu, who brought the corsair/dark templar strategy (off of a fast forge expansion) into the meta in that landmark MSL final (I remember watching it and jumping with joy, because I was a Protoss player and Protoss was absolutely dumpstered by Zergs in those days).


The Zergs eventually adapted into a stable 5 hatchery hydralisk style with a fast spire for scourge to limit early corsair counts.


I chose Brood War over, say, SC2 as an example because unlike many games that are continuously tweaked and balanced, the last balance patch for Brood War was 1.08, released in 2001, and the game has not changed since. The reinvention of different styles was entirely the players' doing.

But that's enough on the story side.

The reason I want to discuss "style" rather than "strategy" is that strategy often carries a connotation of optimality, whereas style is arbitrary. In the early heyday of Boxer, all he wanted to do was micro his 5 marines, and he was extremely proficient at it. This style was proven suboptimal by his own disciple iloveoov, who showed that if you have 50 marines, maybe you don't need to micro at all.

Yet it is impossible to conceive of a human activity that we perform without a style. We were somehow able to come up with a set of almost arbitrary attributes (maximize the efficiency of the first 10 marines in a dropship) that, once emphasized, give rise to an extremely potent local optimum that you would never be able to find if you started your optimization without them.

It is also clear that if a space can be searched efficiently or exhaustively, style ceases to be relevant. We witnessed AlphaGo easily dispatch the top player, pulling out moves that amounted to giving thousands of years of philosophy and aesthetics the middle finger.

In conclusion, rather than applauding the computer's ability to pull off an "inhuman" move as a triumph, it shows the limitations of AI. From Deep Blue to AlphaGo, not much has changed in their styles: both try to exhaustively search a huge space as well as they can, now with better computers and more clever encodings (conv nets, deep RL). It's only a matter of time before our kimchi-eating thinking machines lose to an enormous cluster on a diet of pure lightning.

In conclusion (again), whatever style we observe in these game-playing agents is a direct manifestation of the human effort behind them. Only when the day comes that an agent plays a piece on the board and, when asked why, replies "it looked like a bird" should we be genuinely worried about our existence.

Until then, keep being stylish.

(I can probably answer some questions on AI and deep reinforcement learning, as I am fairly familiar with the field)

edit: damn my first post since forever. feels good to post as an ultralisk man
Life is run, it is dance, it is fast, passionate and BAM!, you dance and sing and booze while you can for now is the time and time is mine. Smile and laugh when still can for now is the time and soon you die!
sc2chronic
Profile Joined May 2012
United States777 Posts
March 12 2016 23:00 GMT
#2
i think alphago can beat starcraft in go
terrible, terrible, damage
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 12 2016 23:26 GMT
#3
I would love to see AlphaGo play chess to see what style it develops as well as whether it can teach itself to become good enough to surpass the current top engines Stockfish and Komodo.
tec27
Profile Blog Joined June 2004
United States3702 Posts
March 13 2016 00:36 GMT
#4
On March 13 2016 07:53 evanthebouncy! wrote:
In conclusion, rather than applauding the computer's ability to pull off an "inhuman" move as a triumph, it shows the limitations of AI. From Deep Blue to AlphaGo, not much has changed in their styles: both try to exhaustively search a huge space as well as they can, now with better computers and more clever encodings (conv nets, deep RL). It's only a matter of time before our kimchi-eating thinking machines lose to an enormous cluster on a diet of pure lightning.

I think this statement fits your narrative better than the actual facts of the situation do. If you had matched AlphaGo against a pro on the Internet without telling them it was a computer, I think the vast majority of them would have been quite, quite surprised to find that out after the match. Partly because they didn't expect to be beaten by a computer, but also because the plays it makes are not that inhuman.

You also overestimate the value of distributed computing here, the distributed version of AlphaGo only has a 70% winrate over the single machine version. This was not a victory for computing power, we've had plenty of more powerful collections of computing resources in the past. The novel part of AlphaGo, and the reason it is able to play more like a human can play, is the combination of the techniques it is using. Humans do the sort of optimization search you are talking about as well, but the difference (historically) has been that humans are better at what we might term "intuition" from past experience. That is, humans can synthesize complex pattern recognition into a very fast path in their mind. By doing this, humans are able to discard a vast number of possible moves as being completely irrelevant, and focus quickly on ones that are relevant to winning the game. Computers have decidedly not been very good at this sort of thing in the past, but AlphaGo is a demonstration that this might not be an eternal limitation.
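That pruning-by-intuition mechanism can be sketched concretely. In AlphaGo-style tree search, the policy network's prior probability scales each candidate move's exploration bonus, so moves the network considers implausible are effectively never visited. A minimal illustration of that selection rule (the node layout and constant here are my own toy assumptions, not DeepMind's actual code):

```python
import math

def select_move(node, c_puct=1.0):
    """PUCT-style child selection: score = Q + U, where the policy
    network's prior steers exploration toward 'intuitive' moves.
    The node structure (children with prior/visits/value_sum) is a
    toy assumption for illustration."""
    total_visits = sum(child.visits for child in node.children)
    best, best_score = None, float("-inf")
    for child in node.children:
        # Q: average value of simulations that passed through this child
        q = child.value_sum / child.visits if child.visits else 0.0
        # U: exploration bonus scaled by the prior; moves the policy
        # net thinks implausible get almost no bonus and are starved
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visits)
        if q + u > best_score:
            best, best_score = child, q + u
    return best
```

With equal value estimates, the child with the larger prior dominates the exploration term — that is the "fast intuition path" doing the pruning, while a consistently high Q can still override it.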

As far as the "style" that you talk about, it seems to me that this comes from a couple places:

1) Patterns of plays that develop out of the way you learned the game. I see no particular reason why neural nets wouldn't also experience this effect. The data or methods that are used to train them necessarily have an effect on the way the network activates during usage, similarly to how human brains work.

2) Optimization for different sorts of values. Humans have a tendency to optimize for things that "look cool" in their own play. Given two options that perform seemingly the same in a given game, they'd probably go for the one that makes them feel happier. If this was a desirable trait of an AI, I see no particular reason why you wouldn't be able to train such a thing given a way to value its outputs. Valuing the outputs is seemingly the hard part there, but we already have tons of people working every day to figure out how to better target ads at people using forms of AI. Is using AI to determine how an ad will appeal to someone really all that different from figuring out if a certain play/maneuver will?

Relatedly, DeepMind has talked about future plans to operate the same AI setup as AlphaGo, but beginning from "pure" foundations; that is, beginning from random play with itself, rather than training on expert Go games. I think the outcome of that work will likely be very interesting here: will the AI develop a style all its own, vastly different from the way humans have ever played the game?

Humans don't operate all that differently from what AlphaGo is doing, they just have a system of values decided by evolution and a less controlled environment/upbringing. AlphaGo, however, is a step in the direction of being able to do more of that with AI; to control less of the upbringing, to require less human intervention in their training, to generalize the solution to more and more fields. While we are prone to thinking today that AIs play games in a very clean, rigid way, I think this is only indicative of the limited number of variables they can value, and the limits in our abilities to train them organically. I don't think this is likely to be a limit for too much longer.

Full disclosure: I work for Google, although nothing related to DeepMind/AI
Can you jam with the console cowboys in cyberspace?
Chocolate
Profile Blog Joined December 2010
United States2350 Posts
March 13 2016 00:44 GMT
#5
AlphaGo does not exhaustively search the set of possible moves from the current position; it is much more nuanced than that. A better way to conceive of how it works is "pattern-finding". An example in StarCraft terms: it would never even consider making very early missile turrets, because that doesn't fit the general pattern of games that cause it to win.

We can probably expect AlphaGo to come up with a pretty interesting style/meta during the phase where it plays games against itself to learn. It will probably start with a fairly standard meta (because it initially learns from a set of past games), but that could lead to it playing a completely different meta after enough simulations against itself.
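That drift from an initial meta to a self-discovered one can be illustrated with a toy self-play loop. Everything below — the strategy names, the payoff table, and the update rule — is my own illustrative sketch, not how AlphaGo actually trains:

```python
import random

def self_play_train(strategies, payoff, episodes=20000, eps=0.1, lr=0.05):
    """Toy self-play: the agent plays both sides of a one-shot matchup
    game and tracks each strategy's empirical winrate against its own
    current play. All names and numbers here are illustrative."""
    pref = {s: 0.5 for s in strategies}

    def pick():
        # epsilon-greedy: mostly the current favorite, sometimes explore
        if random.random() < eps:
            return random.choice(strategies)
        return max(strategies, key=pref.get)

    for _ in range(episodes):
        a, b = pick(), pick()
        r = payoff[a][b]                      # +1 a wins, -1 a loses, 0 tie
        win_a = {1: 1.0, 0: 0.5, -1: 0.0}[r]
        pref[a] += lr * (win_a - pref[a])     # move toward observed winrate
        pref[b] += lr * ((1 - win_a) - pref[b])
    return pref

# A tiny made-up matchup table in which 'defend' never loses:
PAYOFF = {
    "rush":   {"rush": 0,  "tech": 1, "defend": -1},
    "tech":   {"rush": -1, "tech": 0, "defend": -1},
    "defend": {"rush": 1,  "tech": 1, "defend": 0},
}
```

Started from uniform preferences over `["rush", "tech", "defend"]`, the agent first settles on whatever looks best against its own early play, then drifts to "defend" once exploration reveals it never loses — a cartoon of a meta shifting purely through self-play.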
MyLovelyLurker
Profile Joined April 2007
France756 Posts
Last Edited: 2016-03-13 01:34:44
March 13 2016 01:31 GMT
#6
On March 13 2016 08:26 DuckloadBlackra wrote:
I would love to see AlphaGo play chess to see what style it develops as well as whether it can teach itself to become good enough to surpass the current top engines Stockfish and Komodo.


This was already undertaken last year by Matthew Lai's chess project 'Giraffe' (arxiv.org). While its Elo was slightly lower than the two aforementioned monsters, it is a single-person project trained by self-play only (as opposed to crazy handcrafted evaluation functions) for a couple of days on a single machine, so it's extraordinarily impressive. It is not impossible to believe that, given enough time, the program would learn enough by itself to overtake them.
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
Excalibur_Z
Profile Joined October 2002
United States12246 Posts
March 13 2016 05:01 GMT
#7
It's a very curious question, and first we have to ask some basic ones:
- What race will it choose? (if not assigned one)
- What maps will it play on? (assuming it will have to train on specific maps)

If AlphaGo's descendant plays against itself at an accelerated game speed in order to learn and adapt, then it will probably learn from the very beginning, with four workers and a townhall structure. Maybe its first few hundred games will involve attacking with its first four workers and not building anything at all. Eventually it will figure out how to build units and send those more effective fighting units in to win more easily. The learning process always has to start from the beginning of a match and gradually extend. At some point, it will figure out that 2 zerglings result in a win, then 4 zerglings, then 6, and then it will learn to have its workers fight back. It probably won't even know what Battlecruisers or Defilers are until hundreds of thousands, perhaps millions of simulations, because they're so far down the tech tree and games never get that far.

It also remains to be seen at what point it will learn unit control. It could be the very first thing it learns, before even building its first structure. There's only so much you can do at the start of a game, but one thing you can do is send your workers blindly around the map and give them an attack command on the enemy base and win. It probably won't find out why Marines are good for a very long time (because it would keep losing to rushes until it learns to micro or wall). I would guess that everything it does would be centered around the early game and nothing more for months, if not years.

The really interesting thing is that because it keeps playing against itself, then it could eventually stop using rushes and start teching/early expanding because it's capable of fending off a rush. What's interesting about that is since it could theoretically have perfect micro by that point, a rush could simply beat most human opponents outright, but it wouldn't use it because it's learned that rushing is no longer an effective strategy against itself.
Moderator
evanthebouncy!
Profile Blog Joined June 2006
United States12796 Posts
March 13 2016 05:22 GMT
#8
On March 13 2016 09:36 tec27 wrote:
...


Hey, I remember you! Long time no see.

You seem to hold the opinion that the majority (if not all) of human reasoning can be explained by pattern recognition. Am I correct in saying that?

Assuming that is correct, then yes, the NN architectures of today may well capture human-like reasoning. However, there are certain aspects of intelligence, like simulation and forming hypotheses, that cannot naturally arise from these networks unless we explicitly encode them in.

The fact that we can learn and infer a huge amount of information from very few training examples makes humans different from a network, at least at the moment. Now, we could make the argument that a human is a well-initialized, pre-trained network that only needs a bit of additional data to fit to the current environment, maybe.

But all in all, I think the current network models are still insufficient to capture intelligence, or style, for that matter.
tec27
Profile Blog Joined June 2004
United States3702 Posts
March 13 2016 06:27 GMT
#9
I think the defining difference of AlphaGo is based around emulating human "intuition", but I don't know that I would say intuition represents the vast majority of human reasoning. Humans do tend to make use of it a lot (often subconsciously), but I think the reason it's important to note for this particular case is that it is clearly important for winning at Go.

I dunno, the question of whether or not such AIs represent intelligent beings seems to be a philosophical question. If humans program something, can it actually learn things? If we introduce random events and let these mutate the program, is that the same thing as human learning? Can such a program be creative? You could go either way there, but I think we'd both probably agree on the outputs such a program could produce. It's more a matter of how you define learning or creativity or intelligence, and what that encompasses.

I don't think there's a particularly great grasp on what makes humans intelligent yet, but I also think things like AlphaGo are interesting because they can start to push the boundaries there and make you think about it. As these get better at doing things humans are able to do, we can also use them to identify the parts that *are* unique to humans and make us intelligent.
evanthebouncy!
Profile Blog Joined June 2006
United States12796 Posts
March 13 2016 06:43 GMT
#10
Right, except I don't think AlphaGo emulated intuition as much as we give it credit for. It merely used a better state abstraction via a convolutional NN, and after a billion gradient descent steps was able to converge on some better-than-human policy.
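That "better state abstraction via a convolutional NN" bottoms out in exactly one operation: learned filters slid across feature planes of the board. A hand-rolled minimal version, with a fixed kernel standing in for learned weights (real networks stack many such layers and learn the kernel values by gradient descent):

```python
def conv2d(board, kernel):
    """Minimal 2D convolution (no padding, stride 1) over a board
    plane of 0/1 stone indicators. Each output cell is the kernel's
    weighted sum over the patch of board it covers."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(board), len(board[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(board[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

AlphaGo's networks take the position in as a stack of such binary planes (own stones, opponent stones, liberties, and so on), and the "billion gradient descent steps" are what tune the kernel values so the stacked filters become a useful abstraction of the state.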

Although I do agree that it is very useful to push the boundaries. Once we have a good AI for playing StarCraft I'd be more convinced, because StarCraft has far more states and the decisions are far more complex than in Go. The whole partially-observable aspect makes it fun to think about.

Incidentally, do you know if the power consumption of these machines is being tracked anywhere (inside Google or out)? I would like to make an infographic on how many hamburger-equivalents are used to train and run these networks, if you can point me in the right direction.
letian
Profile Blog Joined May 2010
Germany4221 Posts
Last Edited: 2016-03-13 09:00:59
March 13 2016 09:00 GMT
#11
I thought highly speculative posts about AlphaGo would never get out of the SC2 section.
A network that learnt some patterns of one specific game is not an AI, it is a joke.
But you can sell this joke to people for the sake of fun.
BisuDagger
Profile Blog Joined October 2009
Bisutopia19351 Posts
March 13 2016 10:23 GMT
#12
On March 13 2016 14:01 Excalibur_Z wrote:
It's a very curious question and we have to ask first some basic questions:
- What race will it choose? (if not assigned one)
- What maps will it play on? (assuming it will have to train on specific maps)

...

The maps question is one I wonder about. Let's say it trains on Fighting Spirit until it has optimal scouting patterns. What happens if Flash or Boxer floats his CC to another mineral patch at the start of the game? Does it keep searching? Does it send more scouts? Same for hidden buildings. How creative can Alpha get? Will it ever think to 2 rax in the middle of the map?
Moderator | Former Afreeca Starleague Caster: http://afreeca.tv/ASL2ENG2
evanthebouncy!
Profile Blog Joined June 2006
United States12796 Posts
March 13 2016 20:24 GMT
#13
On March 13 2016 19:23 BisuDagger wrote:
...

The maps question is one I wonder about. Let's say it trains on fighting spirit until it has optimal scouting patterns. What happens if flash or boxer float their CC to another mineral patch at the start of the game. Does it keep searching? Does it send more scouts? Same for hiding buildings. How creative can Alpha get. Will it ever think to 2rax in the middle of the map?


Well, see, that we don't know. But ideally the AI would have a more abstract notion of scouting, defined as: keep searching if the enemy has not been found yet. So it would keep going until it gets vision, instead of blindly doing a pattern once and forgetting.
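That "more abstract notion of scouting" is the difference between executing a fixed pattern and pursuing a goal. A sketch under a made-up game API (`worker.move_to` and the `enemy_seen` predicate are hypothetical, not BWAPI calls):

```python
def scout(worker, regions, enemy_seen):
    """Goal-directed scouting sketch: keep visiting candidate regions
    until the enemy base is actually seen, rather than running one
    fixed route and stopping. Returns the region where the enemy was
    found, or None if every candidate was checked."""
    for region in regions:
        worker.move_to(region)      # hypothetical movement command
        if enemy_seen(region):      # hypothetical vision check
            return region
    return None
```

The floated-CC case above falls out naturally: if the enemy isn't at any standard start location, the loop simply continues into the remaining regions instead of terminating.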
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.