Wikinews interviews Mario J. Lucero and Isabel Ruiz of Heaven Sent Gaming

Friday, November 7, 2014. Albuquerque, New Mexico — Online entertainment is a booming market, and plenty of players are making their play; back in March of this year The Walt Disney Company bought the multi-channel network Maker Studios. What is web entertainment, and what are the arts within it? And who are the people venturing into this field? Wikinews interviewed Mario Lucero and Isabel Ruiz, the founders of Heaven Sent Gaming, a small entertainment team. The group has been responsible for several publications across different media formats; one successful example was aywv, a gaming news website that ranked #1 in Gaming on YouTube from September to November 2009. Heaven Sent Gaming was also the subject of a referential book, released in 2014, entitled Internet Legends – Heaven Sent Gaming.

Retrieved from “https://en.wikinews.org/w/index.php?title=Wikinews_interviews_Mario_J._Lucero_and_Isabel_Ruiz_of_Heaven_Sent_Gaming&oldid=3060362”


Recalled pet food found to contain rat poison

Friday, March 23, 2007

In a press release earlier today, New York State Agriculture Commissioner Patrick Hooker, along with Dean of Cornell University’s College of Veterinary Medicine Donald F. Smith, confirmed that scientists at the New York State Food Laboratory identified Aminopterin as a toxin present in cat food samples from Menu Foods.

Menu Foods is the manufacturer of several brands of cat and dog food subject to a March 16, 2007 recall.

Aminopterin is a drug used in chemotherapy for its immunosuppressive properties and, in some areas outside the US, as a rat poison. Earlier reports stated that wheat gluten was a factor being investigated, and officials now state that the toxin likely came from Chinese wheat used in the pet food; in China the compound is used for pest control. Investigators will not say that this is the only contaminant found in the recalled food, but knowing the identity of the toxin should assist veterinarians treating affected animals.

The Food Laboratory tested samples of cat food received from a toxicologist at the New York State Animal Health Diagnostic Center at Cornell University. The samples were found to contain the rodenticide at levels of at least 40 parts per million.
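For context, parts per million by mass translates directly into milligrams per kilogram, so the reported level corresponds to (a unit restatement for readers; the press release itself gives only the 40 ppm figure):

\[ 40\ \text{ppm} \;=\; \frac{40\ \text{mg of aminopterin}}{1\ \text{kg of food}} \;=\; 40\ \mu\text{g per gram of food} \]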

Commissioner Hooker stated, “We are pleased that the expertise of our New York State Food Laboratory was able to contribute to identifying the agent that caused numerous illnesses and deaths in dogs and cats across the nation.”

The press release suggests Aminopterin, a derivative of folic acid, can cause cancer and birth defects in humans and can cause kidney damage in dogs and cats. Aminopterin is not permitted for use in the United States.

The New York State Food Laboratory is part of the Federal Food Emergency Response Network (FERN) and as such, is capable of running a number of unique poison/toxin tests on food, including the test that identified Aminopterin.

Retrieved from “https://en.wikinews.org/w/index.php?title=Recalled_pet_food_found_to_contain_rat_poison&oldid=4698300”


Clinical study supports pollen tablets as hay fever treatment

Friday, September 1, 2006

A large clinical trial found that grass pollen tablets under the tongue (sublingual) are an effective treatment option for hay fever sufferers. The results have been published in this month’s edition of the Journal of Allergy and Clinical Immunology.

Tablets with grass pollen that dissolve quickly under the tongue may provide doctors with a better tool to fight the illness. Patients could take the pills at home, and after a while the symptoms may disappear completely, allowing treatment to be stopped. Contrary to the current standard treatment, this therapy targets the immune system in a way that treats the cause and not just the symptoms.

A total of 634 patients from eight countries participated in the trial. Half received Grazax® tablets (a product of ALK-Abello, Hørsholm, Denmark); the other half received placebo pills. The group receiving treatment showed fewer symptoms during hay fever season and consequently cut down on their medication use. Grazax® also caused only a few, tolerable side effects, such as itching and swelling in the mouth.

Combined with several earlier papers, these results provide a basis for ALK-Abello to apply for European approval of their drug. It has already been approved in Sweden, and the company will follow a Mutual Recognition Procedure hoping to get their product in European pharmacies by the end of 2006. The company will continue to fund further inquiries into Grazax® and will keep collaborating with scientists from several European countries.

During hay fever season, patients develop a runny nose, swelling of the mucous membranes of the nose, sneezing and teary eyes, a condition doctors call allergic rhinoconjunctivitis. The condition seems to be increasingly frequent, with approximately 10% of the population suffering from grass pollen allergy. First-line treatment includes antihistamines (like cetirizine or Zyrtec®/Zyrlex®) and steroid sprays, but results are often disappointing.

Long-term results have been demonstrated for weekly shots with pollen extracts (a procedure called desensitization), but the many visits to the doctor make the process troublesome. Also, the patient has to wait some time before leaving the doctor’s office, due to the risk of developing a kind of shock state termed anaphylactic shock.

Retrieved from “https://en.wikinews.org/w/index.php?title=Clinical_study_supports_pollen_tablets_as_hay_fever_treatment&oldid=4379363”


In the land of the open source elves: Interview with “Battle for Wesnoth” creator David White

Thursday, June 1, 2006

If you’ve always wanted to live in a world populated by elves, dwarves and wizards, you don’t need to pay for a World of Warcraft subscription or buy the Special Extended DVD Trilogy Edition of The Lord of the Rings just yet. You could instead give Battle for Wesnoth a try — an open source turn-based strategy game in a fantasy setting. For the practically minded, “open-source” means that the code which the game is made of is available to anyone who wishes to use, redistribute or change it. It was created by volunteers and can be freely shared. Even the multiplayer online part of the game is free (no ads or spyware either).

But Wesnoth, as it is often abbreviated, is notable not only because it is free. While its graphics are simple by modern standards, the sheer number of units and scenarios that are available for the game is staggering. This is where the “open source” philosophy comes truly into play: anyone can contribute art or new campaigns. As of May 2006, the forum where users can share and discuss their own art contained over 25,000 messages. Most of this art is made available under the same open source terms as the game itself.

Battle for Wesnoth lets you command armies of units such as archers, swordsmen, mages and gryphons during the course of a campaign consisting of multiple missions. Typically, your mission is to defeat an enemy leader, but some scenarios let you liberate a prisoner, find a lost artifact, traverse dangerous territories, and so on. Your best units can be taken from one mission to the next, “levelling up” in the process. Even units of the same type vary in their abilities, making the tactical use of the right unit at the right time very important.

The game is reminiscent of turn-based strategy classics such as Heroes of Might and Magic or Warlords. Throughout each campaign, the player is informed of the progress of the story. For instance, in the “Heir to the Throne” campaign, the player follows the story of Prince Konrad, who must reclaim the throne of Wesnoth from an evil queen.

The game was originally designed by David White, who is still the project’s lead developer. We exchanged e-mails with David about the state of open source gaming, the future of Wesnoth, and the collaborative aspects of game development.

David, thanks for taking our questions. Open source games suffer from the problem that very few people have all the abilities needed to make a good game: programming, graphics, story development, sound effects, music, and so on. When you started Battle for Wesnoth, how did you deal with this?

Not very well.  🙂

Version 0.1 of Wesnoth was developed entirely by me, and it was ugly. It had awful graphics, and no sound or music at all.

I think the best way to deal with the problem is to make an early version of the game which showcases the desired gameplay. Then, people with the appropriate skills who like the game will contribute. This worked out well with Wesnoth, anyhow, as I soon attracted a fine artist, Francisco Munoz, and once the graphics were decent, more people started wanting to help.

I noticed that the forum allows anyone to submit art for the game. How important have contributions from ordinary players been for development?

Well, as with almost any free software project, contributions from users have been very important. In the area of art, this is definitely so, though making a substantial contribution of art generally requires a reasonable amount of skill, so the number of people who can contribute art is somewhat limited.

This has meant that the number of people who contribute art is much smaller than, say, the number of people who contribute bug reports or feature requests. Still, there are plenty of good pixel artists out there, and we have had many good contributions from our community.

Also, within the game itself, it’s possible to directly download new campaigns from the Internet, many of which have been created by players. Do you think that, in essence, we are seeing the beginnings of applying “wiki” principles to game development?

On one hand, I see the ability to directly download new campaigns as a mild convenience — it wouldn’t be much more difficult for the user to, for instance, go to a web page and download campaigns.

On the other hand, it does blur the line between ‘developer created content’ and ‘user created content’ and so, like a Wiki, makes it much easier for any user to contribute to the game.

I think that for an Open Source game, making it as easy as possible for users to contribute content is a key way to help make the game succeed. We have tried hard to do this in Wesnoth. I don’t think that with something dynamic like a game, it’s quite as easy to make absolutely anyone be able to edit it or contribute as easily as they can in a Wiki, but we have tried to make it as easy as possible.

How do you moderate user-submitted content? Are there scenarios or graphics you have rejected because they crossed a line — sexual content, excessive violence, etc.?

Well, there are basically three levels of content acceptance:

  1. ‘Official’: content can be accepted into the game itself — the content will reside in our SVN repository, and will be in the tarballs released by developers.
  2. ‘Campaign Server’: Content can be allowed on the campaign server (the server which users can connect to in-game to download more content).
  3. ‘Disallowed’: Finally, content can be disallowed on the campaign server, which means that the creator could only distribute it using their own channels (for instance, having a web site people could download it from).

Content only makes it to (1) if the developers happen to like it very much. We don’t have any firm rules as to what is allowed and disallowed, and a campaign that has shortcomings from the developer’s point of view might still be allowed if it is exceptional in other areas. As an example of this, the campaign ‘Under the Burning Suns’ contained explicit references to religion. To avoid controversy, we wanted to avoid references to religion in Wesnoth. However, recognizing the exceptional quality of the campaign, we decided to accept it into the official version of Wesnoth in spite of this one aspect we didn’t like.

Artwork containing nudity has also been a controversial point in the past, as has violence (particularly explicit depiction of blood). We generally take the point of view that we will review each item as it comes, rather than making blanket rules.

With regard to whether we allow things onto the campaign server, (2), our general policy is that to be allowed onto the campaign server, the content need only be licensed under the GPL. However, we reserve the right to remove content that we consider to be distasteful in any way. Fortunately, our content submitters are generally very reasonable, and we haven’t had to exercise this right.

Our aim is to keep Wesnoth appropriate for users of any age and background — of course, it contains some level of violence, but this is not depicted very explicitly, and only parents who do not want to expose their children to animated violence of any level need be concerned. For this reason, we also do not allow expletives on our forums or IRC channels.

How do you feel about games like “Second Life”, where players trade user-generated content for money?

I’ve never understood the appeal of games like that. I don’t enjoy cheating in games, and to me buying items with real money seems like cheating — except worse, since it actually costs money.

What changes to the game or gameplay do you anticipate in the coming months and years?

Well, we’ve avoided making many gameplay changes at all, since very early on in Wesnoth’s development. Wesnoth is meant to be a simple game, with simple gameplay, and ‘changing’ gameplay will probably lead to it being more complex. We want to keep it simple.

Changes will probably focus on improving existing features, and making the engine a little more customizable. Enhancing the multiplayer component is big on the list — we’ve progressively added more and more features on the server. We also want to add more graphical enhancements, for instance a particle system to allow various combat effects.

If you had unlimited resources at your disposal to improve the game, what would you change about it?

Wesnoth was always designed to be a simple game, with simple goals. It has exceeded all the expectations I originally had for it. There is still some ‘polishing’ work going on, but really I don’t think there is too much I would dramatically change.

Probably the largest thing I can name is a much better AI than we currently have. I’m pretty happy with the AI developed for Wesnoth — I think it’s much better than AIs for most commercial games — but it could be better. That’s the only area of Wesnoth that I think could really be very dramatically improved.

I am pretty happy with our in-game graphics. Some people compare our graphics to modern commercial games, and think our graphics are laughably poor. We often get comments that our graphics are around the same quality as those seen in SNES or Genesis games, or PC games from a decade ago. (These people should try playing a strategy game on the SNES/Genesis/PC from this long ago; Wesnoth’s graphics are much better).

I am very happy with our graphics overall. I think our artists have done an excellent job of making the game look attractive without detracting from functionality. Adding 3D graphics, or changing the style of the 2D graphics would only be wasted effort in my mind — I think we’ve achieved a great balance of making the game easy and clear, while making it look good.

With unlimited resources, I would like some more storyline/cutscene images, and a nice new title screen, but these are relatively small concerns I think.

There are some enhancements to multiplayer I would like added — multiplayer campaigns are a long-time feature request, as are more options and features on the multiplayer server.

Overall though, if I had ‘unlimited resources’, I’d much rather develop an entirely new game. We don’t have enough good Open Source games — it’s a waste to pour all the resources we have into one. 🙂

Wesnoth has dwarves with guns, World of Warcraft has gnomes and goblins with explosives and flying machines — where do you, personally, define the limits of the fantasy genre? Are there scenarios set in a steampunk world, or ones with modern technology? Would you allow those?

Actually we have Dwarves with ‘Thundersticks’ 🙂 — mysterious weapons that make a loud sound and do lots of damage, but are clumsy and unreliable. The developers do not comment on whether these ‘thundersticks’ are like ‘guns’ on Earth. We like to keep Wesnoth slightly mysterious, and leave some things up to the player’s interpretation, rather than spell it out.

We once used to have dragoons with pistols, and other weapons like that, but we made a very intentional decision to remove them.

I don’t like categorizing things into ‘genres’. Many people debate whether Wesnoth is an ‘RPG’, or ‘strategy game’, etc. I think the debate of what genre something is in is largely irrelevant.

We do have a vision for what the world of Wesnoth is like though — and Wesnoth is a world of ancient-era weaponry, with a little magic. Of Elves and Dwarves and Orcs. Very much inspired by Tolkien. I actually originally chose this setting because my focus was on technical excellence — writing a good, solid engine — not on creating a new fantasy world. I decided to stick with a very well-known, proven theme, figuring I couldn’t go wrong with it.

We probably wouldn’t allow anything that departs dramatically from the world we’ve made into the official version of the game, but we’d be happy to have it on our campaign server. The main attempt at a ‘total modification’ of Wesnoth is a project known as Spacenoth, which has a sci-fi/futuristic theme.

At this time though, there is no release of this project. I hope they do well though.

How do you feel about turn-based games like “Heroes of Might and Magic” with their massive army-building and resource management? Do you think there’s going to be an open source equivalent of this type of game soon?

I haven’t played Heroes of Might and Magic very much. The few times I have played it, I thought it was boring to be honest. I don’t like the type of game where one marches armies around a ‘large map’ and then must ‘zoom in’ to a different ‘battle field’ every time a battle takes place. I find games like that to take far too long, and tend to become tedious.

I would prefer a civilization or perhaps colonization type game. FreeCiv is nice, though it’s close to being a clone of Civilization II. I’d like an original game that had the same sort of theme as civilization, but with new and innovative rules.

Every online game and community is also a social space. Have you met interesting people through Wesnoth whom you would not have met otherwise? Are there other stories you can tell from the community — have there been real world meetups, chat rooms, etc.?

I’ve come into contact with lots of very interesting people through Wesnoth, and have learned a great deal from them. The Wesnoth developers — many of whom are from Europe — used the LSM conference in France in 2004 as an opportunity to meet each other. Nekeme, an organization dedicated to developing and promoting Free games, was kind enough to sponsor two developers to go. Unfortunately, I was not able to attend, but the developers who did had a very nice time.

We have several IRC channels on irc.freenode.net, and the most popular ones, #wesnoth and #wesnoth-dev, are usually fairly busy with both discussion about Wesnoth and friendly discussion of other topics.

Also, the developers have tried to make a habit of playing “co-operative multiplayer” games against the AI. During these games, we use the in-game chat facility to get to know each other better, and discuss improvements to the game.

Are there other open source games that have personally impressed you, or that you enjoy playing?

I’m afraid I haven’t played many. I like RPGs, and I know lots of people love NetHack and similar games, but I much prefer party-based and generally more storyline-oriented RPGs.

FreeCiv is pretty well-done, though I am happy to play commercial games, and so I think Civilization 3 and Civilization 4 are both technically superior in virtually every regard. I think that’s an inevitable problem when you make an Open Source game a straight clone of a commercial game.

Probably the most promising Open Source game I’ve seen is GalaxyMage, but it still has a long way to go.

Honestly, I don’t play that many games. I like playing commercial RPGs, usually console-based, with my wife, and I occasionally like playing the commercial Civilization series. To play an Open Source game, it’d have to be very good, and appeal to my tastes, and I haven’t found any Open Source games like that, sadly.

Retrieved from “https://en.wikinews.org/w/index.php?title=In_the_land_of_the_open_source_elves:_Interview_with_%22Battle_for_Wesnoth%22_creator_David_White&oldid=4567684”


Cape Verde to launch first public university, with Brazil’s support

Saturday, September 3, 2005

To help launch Cape Verde’s first public university, the Brazilian government has promised to provide organizational advice and to train professors.

According to Fernando Haddad, Brazilian Minister of Education, Brazil will send its own professors to the small nation off the African coast to train Cape Verdean academics in lecturing, marking, and other important skills involved in the profession.

Filomena Martins, Cape Verdean Minister of Education and Human Resource Enhancement, met with Haddad on August 22 to work out the details of the plan, which was first launched by Brazil’s President, Luiz Inácio Lula da Silva, in July 2004.

Cape Verde also hopes to increase the level of education for the university professors themselves. Of the 300 professors at the planned institution, only 3% have doctorates and only 21% have master’s degrees.

Brazil also helps Cape Verde reduce its debt through the Post-Graduate Student Agreement Program (PEC-PG), which encourages Cape Verdeans to become foreign exchange students in Brazil. In 2005-06, 20 students will participate, with the number increasing to 50 students in the next calendar year.

Retrieved from “https://en.wikinews.org/w/index.php?title=Cape_Verde_to_launch_first_public_university,_with_Brazil%27s_support&oldid=1110242”


FBI begins widespread financial probe of 26 firms

Thursday, September 25, 2008

The FBI is investigating 26 firms and 1,400 individuals involved in the US financial crisis for fraud and “sub-prime lending practices”. Freddie Mac, Lehman Brothers, Fannie Mae and AIG are among the firms being scrutinized after recently receiving federal bailouts.

Investigators, who are cooperating with the IRS, the Postal Service, and other federal agencies to complete their investigation, are concerned that major corporations may also have forced or bribed ratings agencies to favor them.

The probe, which is in early stages, began eight months ago when the FBI began taking a close look at the mortgage industry and widespread irresponsible loaning practices. At least one corporation has been raided, but so far no evidence of fraud has been found.

The FBI has questioned executives of each of the firms closely, and arrested two in June. An anonymous source told The Times that the firms had been ordered to “hold all papers and e-mails under lock and key” as the FBI scours the finances of each firm.

Many of the companies and individuals being investigated are at the center of the nationwide financial crisis and the controversial bailout plans, and have been widely blamed for the country’s economic troubles. The investigation has come at a time when the eyes of many in the US and around the world are turned towards the financial markets, as Congress and politicians scramble to fix the crisis while the election date looms closer.

Officials told CNN that it would be a long time before the investigations were finished, adding a warning: “Don’t expect indictments tomorrow or next week or next month”.

Retrieved from “https://en.wikinews.org/w/index.php?title=FBI_begins_widespread_financial_probe_of_26_firms&oldid=1979376”


Gastric bypass surgery performed by remote control

Sunday, August 21, 2005

A robotic system at Stanford Medical Center was used to successfully perform laparoscopic gastric bypass surgery, with a complication rate theoretically similar to that seen in standard operations. However, as there were only 10 people in the experimental group (and another 10 in the control group), the sample is too small to be statistically meaningful.
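To see why such a small series cannot settle the question, consider a rough illustration (not a calculation from the study itself): even if no complications at all were observed among ten patients, the standard “rule of three” still puts the 95% upper confidence bound on the true complication rate at about

\[ p_{\text{upper}} \approx \frac{3}{n} = \frac{3}{10} = 30\%, \]

far too wide an interval to declare the robotic approach equivalent to standard surgery.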

If this surgical procedure proves as successful in large-scale studies, it may lead the way for the use of robotic surgery in even more delicate procedures, such as heart surgery. Note that this is not a fully automated system: a human doctor controls the operation via remote control. Laparoscopic gastric bypass surgery is a treatment for obesity.

There were concerns that doctors, in the future, might only be trained in the remote control procedure. Ronald G. Latimer, M.D., of Santa Barbara, CA, warned “The fact that surgeons may have to open the patient or might actually need to revert to standard laparoscopic techniques demands that this basic training be a requirement before a robot is purchased. Robots do malfunction, so a backup system is imperative. We should not be seduced to buy this instrument to train surgeons if they are not able to do the primary operations themselves.”

There are precedents for just such a problem occurring. A previous “new technology”, the electrocardiogram (ECG), has led to a lack of basic education on the older technology, the stethoscope. As a result, many heart conditions now go undiagnosed, especially in children and others who rarely undergo an ECG procedure.

Retrieved from “https://en.wikinews.org/w/index.php?title=Gastric_bypass_surgery_performed_by_remote_control&oldid=4331525”


Sydney Opera House ‘No War’ activists face court for paint cans

Tuesday, January 3, 2006

Two activists convicted of painting the words “NO WAR” in five-metre-high red letters on the highest sail of Sydney Opera House in March 2003 are facing court action again to prevent them from auctioning the equipment used to paint the controversial sign.

Dr Will Saunders and David Burgess were sentenced to nine months periodic detention and ordered to pay the Opera House Trust $151,000 for malicious damage to the building on March 18, 2003.

The pair spent six months in jail for painting the slogan on one of the sails of the Opera House on the eve of the invasion of Iraq. The protesters say they wish to auction the equipment for humanitarian causes in Iraq.

Police confiscated the paint can and two brushes used in the incident and have now applied for a court order to have the can and brushes destroyed. They are saying such an auction would contravene proceeds of crime laws.

Saunders said they wanted to auction the can and send the proceeds to humanitarian causes in Iraq. According to The Australian newspaper, Mr Saunders said the auction could be conducted by a registered charity to raise money for the Mother and Child Hospital in Basra.

He said the can should also be preserved as an important piece of Sydney history.

“We want to give the surplus money that we’ve raised, and anything extra we can make from an auction – not only the paint pot … I think we can raise many many thousands of dollars,” he said. “We’d be happy to come to any reasonable arrangement with the police about how this auction takes place … it’s just mean beyond belief, petty-minded just to destroy it.”

The matter will go before a Sydney court on January 16.

Meanwhile, the world-famous Sydney Opera House is one of 21 international landmarks short-listed to become the new Seven Wonders of the World. The list includes modern landmarks such as Paris’ Eiffel Tower and older candidates like the Colosseum in Rome and China’s Great Wall.

Retrieved from “https://en.wikinews.org/w/index.php?title=Sydney_Opera_House_%27No_War%27_activists_face_court_for_paint_cans&oldid=2465800”


Keep your eyes peeled for cosmic debris: Andrew Westphal about Stardust@home

Sunday, May 28, 2006

Stardust is a NASA space capsule that collected samples from comet 81P/Wild (also known as “Wild 2”) in deep space and landed back on Earth on January 15, 2006. It was decided that a collaborative online review process would be used to “discover” the microscopically small samples the capsule collected. The project is called Stardust@home. Unlike distributed computing projects like SETI@home, Stardust@home relies entirely on human intelligence.

Andrew Westphal is the director of Stardust@home. Wikinews interviewed him for May’s Interview of the Month (IOTM) on May 18, 2006. As always, the interview was conducted on IRC, with multiple people asking questions.

Some may not know exactly what Stardust or Stardust@home is. Can you explain more about it for us?

Stardust is a NASA Discovery mission that was launched in 1999. It is really two missions in one. The primary science goal of the mission was to collect a sample from a known primitive solar-system body, a comet called Wild 2 (pronounced “Vilt-two” — the discoverer was German, I believe). This is the first US “sample return” mission since Apollo, and the first ever from beyond the moon. This gives a little context. By “sample return” of course I mean a mission that brings back extraterrestrial material. I should have said above that this is the first “solid” sample return mission — Genesis brought back a sample from the Sun almost two years ago, but Stardust is also bringing back the first solid samples from the local interstellar medium — basically this is a sample of the Galaxy. This is absolutely unprecedented, and we’re obviously incredibly excited. I should mention parenthetically that there is a fantastic launch video — taken from the POV of the rocket on the JPL Stardust website — highly recommended — best I’ve ever seen — all the way from the launch pad, too. Basically interplanetary trajectory. Absolutely great.

Is the video available to the public?

Yes [see below]. OK, I digress. The first challenge that we have before we can do any kind of analysis of these interstellar dust particles is simply to find them. This is a big challenge because they are very small (order of a micron in size) and are somewhere (we don’t know where) on a HUGE collector — at least on the scale of the particle size — about a tenth of a square meter. So

We’re right now using an automated microscope that we developed several years ago for nuclear astrophysics work to scan the collector in the Cosmic Dust Lab in Building 31 at Johnson Space Center. This is the ARES group that handles returned samples (Moon rocks, Genesis chips, meteorites, and interplanetary dust particles collected by U-2 aircraft in the stratosphere). The microscope collects stacks of digital images of the aerogel collectors in the array. These images are sent to us — we compress them and convert them into a format appropriate for Stardust@home.

Stardust@home is a highly distributed project using a “Virtual Microscope” that is written in html and javascript and runs on most browsers — no downloads are required. Using the Virtual Microscope volunteers can search over the collector for the tracks of the interstellar dust particles.
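As a rough illustration of the idea (and not the project’s actual code), the TypeScript sketch below shows how such a browser-based viewer could work: a stack of focus slices for one field of view is preloaded, the volunteer scrubs through focus depth with the mouse wheel, and a click flags a candidate track. The image paths, element ID, and submission endpoint are hypothetical placeholders.

```typescript
// Minimal sketch (not the actual Stardust@home code) of a browser-based
// "virtual microscope": a stack of images of one field of view, taken at
// different focus depths, scrubbed through by the volunteer, with a click
// marking a candidate particle track. Paths and IDs are hypothetical.

const FRAME_URLS: string[] = Array.from(
  { length: 40 },                              // e.g. 40 focus slices per field of view
  (_, i) => `/tiles/field_0001/focus_${i}.jpg` // hypothetical path layout
);

class VirtualMicroscope {
  private frames: HTMLImageElement[] = [];
  private depth = 0; // index of the focus slice currently shown

  constructor(private canvas: HTMLCanvasElement,
              private onMark: (x: number, y: number, depth: number) => void) {
    // Preload every focus slice so scrubbing feels like turning a focus knob.
    this.frames = FRAME_URLS.map(url => {
      const img = new Image();
      img.src = url;
      return img;
    });
    // Draw the first slice as soon as it arrives.
    this.frames[0].onload = () => this.draw();

    // Mouse wheel moves the focal plane up or down through the stack.
    canvas.addEventListener("wheel", ev => {
      ev.preventDefault();
      const step = ev.deltaY > 0 ? 1 : -1;
      this.depth = Math.min(this.frames.length - 1, Math.max(0, this.depth + step));
      this.draw();
    });

    // A click records a candidate track at the current position and depth.
    canvas.addEventListener("click", ev => {
      const rect = canvas.getBoundingClientRect();
      this.onMark(ev.clientX - rect.left, ev.clientY - rect.top, this.depth);
    });
  }

  draw(): void {
    const ctx = this.canvas.getContext("2d");
    const frame = this.frames[this.depth];
    if (ctx && frame.complete) {
      ctx.drawImage(frame, 0, 0, this.canvas.width, this.canvas.height);
    }
  }
}

// Usage: wire the viewer to a canvas and report marks to a (hypothetical) endpoint.
const canvas = document.getElementById("vm-canvas") as HTMLCanvasElement;
const vm = new VirtualMicroscope(canvas, (x, y, depth) => {
  fetch("/api/mark", { method: "POST", body: JSON.stringify({ x, y, depth }) });
});
vm.draw();
```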

How many samples do you anticipate being found during the course of the project?

Great question. The short answer is that we don’t know. The long answer is a bit more complicated. Here’s what we know. The Galileo and Ulysses spacecraft carried dust detectors onboard that Eberhard Gruen and his colleagues used to first detect and then measure the flux of interstellar dust particles streaming into the solar system. (This is a kind of “wind” of interstellar dust, caused by the fact that our solar system is moving with respect to the local interstellar medium.) Markus Landgraf has estimated the number of interstellar dust particles that should have been captured by Stardust during two periods of the “cruise” phase of the interplanetary orbit in which the spacecraft was moving with this wind. He estimated that there should be around 45 particles, but this number is very uncertain — I wouldn’t be surprised if it is quite different from that. That was the long answer! One thing that I should say…is that like all research, the outcome of what we are doing is highly uncertain. There is a wonderful quote attributed to Einstein — “If we knew what we were doing, it wouldn’t be called ‘research’, would it?”

How big would the samples be?

We expect that the particles will be of order a micron in size. (A millionth of a meter.) When people are searching using the virtual microscope, they will be looking not for the particles, but for the tracks that the particles make, which are much larger — several microns in diameter. Just yesterday we switched over to a new site which has a demo of the VM (virtual microscope) I invite you to check it out. The tracks in the demo are from submicron carbonyl iron particles that were shot into aerogel using a particle accelerator modified to accelerate dust particles to very high speeds, to simulate the interstellar dust impacts that we’re looking for.

And that’s on the main Stardust@home website [see below]?

Yes.

How long will the project take to complete?

Partly the answer depends on what you mean by “the project”. The search will take several months. The bottleneck, we expect (but don’t really know yet), is in the scanning — we can only scan about one tile per day and there are 130 tiles in the collector…. These particles will be quite diverse, so we’re hoping that we’ll continue to have lots of volunteers collaborating with us on this after the initial discoveries. It may be that the 50th particle that we find will be the real Rosetta stone that turns out to be critical to our understanding of interstellar dust. So we really want to find them all! Enlarging the idea of the project a little beyond the search, though, the next step is to actually analyze these particles. That’s the whole point, obviously!

And this is the huge advantage with this kind of a mission — a “sample return” mission.

Most missions rather do things quite differently… you have to build an instrument to make a measurement and that instrument design gets locked in several years before launch practically guaranteeing that it will be obsolete by the time you launch. Here exactly the opposite is true. Several of the instruments that are now being used to analyze the cometary dust did not exist when the mission was launched. Further, some instruments (e.g., synchrotrons) are the size of shopping malls — you don’t have a hope of flying these in space. So we can and will study these samples for many years. AND we have to preserve some of these dust particles for our grandchildren to analyze with their hyper-quark-gluon plasma microscopes (or whatever)!
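For reference, the scanning figures quoted above already imply the “several months” estimate (a rough restatement, not a number given in the interview):

\[ \frac{130\ \text{tiles}}{1\ \text{tile per day}} = 130\ \text{days} \approx 4.3\ \text{months} \]

of scanning alone, before allowing for rescans or downtime.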

When do you anticipate the project to start?

We’re really frustrated with the delays that we’ve been having. Some of it has to do with learning how to deal with the aerogel collectors, which are rougher and more fractured than we expected. The good news is that they are pretty clean — there is very little of the dust that you see on our training images — these were deliberately left out in the lab to collect dust so that we could give people experience with the worst case we could think of. In learning how to do the scanning of the actual flight aerogel, we uncovered a couple of bugs in our scanning software — which forced us to go back and rescan. Part of the other reason for the delay was that we had to learn how to handle the collector — it would cost $200M to replace it if something happened to it, so we had to develop procedures to deal with it, and add several new safety features to the Cosmic Dust Lab. This all took time. Finally, we’re distracted because we also have many responsibilities for the cometary analysis, which has a deadline of August 15 for finishing analysis. The IS project has no such deadline, so at times we had to delay the IS (interstellar, sorry) in order to focus on the cometary work. We are very grateful to everyone for their patience on this — I mean that very sincerely.

And rest assured that we’re just as frustrated!

I know there will be a “test” that participants will have to take before they can examine the “real thing”. What will that test consist of?

The test will look very similar to the training images that you can look at now. But.. there will of course be no annotation to tell you where the tracks are!

Why did NASA decide to take the route of distributed computing? Will they do this again?

I wouldn’t say that NASA decided to do this — the idea for Stardust@home originated here at U. C. Berkeley. Part of the idea of course came…

If I understand correctly it isn’t distributed computing, but distributed eyeballing?

…from the SETI@home people who are just down the hall from us. But as Brian just pointed out, this is not really distributed computing like SETI@home; the computers are just platforms for the VM, and it is human eyes and brains that are doing the real work, which makes it fun (IMHO).

That said… There have been quite a few people who have expressed interest in developing automated algorithms for searching. Just because WE don’t know how to write such an algorithm doesn’t mean nobody does. We’re delighted at this and are happy to help make it happen.

Isn’t there a catch-22 in that the data you’re going to collect would be a prerequisite to automating the process?

That was the conclusion that we came to early on — that we would need some sort of training set to be able to train an algorithm. Of course you have to train people too, but we’re hoping (we’ll see!) that people are more flexible in recognizing things that they’ve never seen before and pointing them out. Our experience is that people who have never seen a track in aerogel can learn to recognize them very quickly, even against a big background of cracks, dust and other sources of confusion… Coming back to the original question — although NASA didn’t originate the idea, they are very generously supporting this project. It wouldn’t have happened without NASA’s financial support (and of course access to the Stardust collector). Did that answer the question?

Will a project like this be done again?

I don’t know… There are only a few projects for which this approach makes sense… In fact, I frankly haven’t run across another at least in Space Science. But I am totally open to the idea of it. I am not in favor of just doing it as “make-work” — that is just artificially taking this approach when another approach would make more sense.

How did the idea come up to do this kind of project?

Really desperation. When we first thought about this we assumed that we would use some sort of automated image recognition technique. We asked some experts around here in CS and the conclusion was that the problem was somewhere between trivial and impossible, and we wouldn’t know until we had some real examples to work with. So we talked with Dan Werthimer and Dave Anderson (literally down the hall from us) about the idea of a distributed project, and they were quite encouraging. Dave proposed the VM machinery, and Josh Von Korff, a physics grad student, implemented it. (Beautifully, I think. I take no credit!)

I got to meet one of the Stardust directors in March during the Texas Aerospace Scholars program at JSC. She talked about searching for meteorites in Antarctica, ones that were unblemished by Earth conditions. Is that our best chance of finding new information on comets and asteroids? Or will more Stardust programs be our best solution?

That’s a really good question. Much will depend on what we learn during this official “Preliminary Examination” period for the cometary analysis. Aerogel capture is pretty darn good, but it’s not perfect and things are altered during capture in ways that we’re still understanding. I think that much also depends on what question you’re asking. For example, some of the most important science is done by measuring the relative abundances of isotopes in samples, and these are not affected (at least not much) by capture into aerogel.

Also, she talked about how some of the agencies that they gave samples to had lost or destroyed 2-3 samples while trying to analyze them. One, in fact, had become statically charged and stuck to the side of the microscope lens, and they spent over an hour looking for it. Is that really our biggest danger? Giving out samples as a show of good faith, and not letting NASA examine all samples collected?

These will be the first measurements, probably, that we’ll make on the interstellar dust. There is always a risk of loss. Fortunately for the cometary samples there is quite a lot there, so it’s not a disaster. NASA has some analytical capabilities, particularly at JSC, but the vast majority of the analytical capability in the community is not at NASA but is at universities, government labs and other institutions all over the world. I should also point out that practically every analytical technique is destructive at some level. (There are a few exceptions, but not many.) The problem with meteorites is that except in a very few cases, we don’t know where they specifically came from. So having a sample that we know for sure is from the comet is golden!

I am currently working on my Bachelor’s in computer science, with a minor in astronomy. Do you see the successes of programs like Stardust opening up more private space exploration positions for people such as myself, even though I’m not in the typical “space” fields of education?

Can you elaborate on your question a little — I’m not sure that I understand…

Well, while at JSC I learned that they mostly want engineers, and a few science grads, and I worry that my computer science degree will not be very valuable, as the NASA rep told me only 1% of the applicants for their work study program are CS majors. I’m just curious as to your thoughts on whether CS majors will be more in demand now that projects like Stardust and the Mars missions have been great successes. Have you seen a trend towards more private businesses moving in that direction, especially with President Bush’s statement of Man on the Moon in 2015?

That’s a good question. I am personally not very optimistic about the direction that NASA is going. Despite recent successes, including but not limited to Stardust, science at NASA is being decimated.

I made a joke with some people at the TAS event that one day SpaceShipOne will be sent up to save stranded ISS astronauts. It makes me wonder what kind of private redundancy the US government is taking for future missions.

I guess one thing to be a little cautious about is that despite SpaceShipOne’s success, we haven’t had an orbital project that has been successful in that style of private enterprise. It would be nice to see that happen. I know that there’s a lot of interest…!

Now I know the answer to this question… but a lot do not… When samples are found, how will they be analyzed? Who gets the credit for finding the samples?

The first person who identifies an interstellar dust particle will be acknowledged on the website (and probably will be much in demand for interviews from the media!), will have the privilege of naming the particle, and will be a co-author on any papers that WE (at UCB) publish on the analysis of the particle. Also, although we are precluded from paying for travel expenses, we will invite those who discover particles AND the top performers to our lab for a hands-on tour.

We have some fun things, including micromachines.

How many people/participants do you expect to have?

About 113,000 have preregistered on our website. Frankly, I don’t have a clue how many will actually volunteer and do a substantial amount of searching. We’ve never done this before, after all!

One last thing I want to say … well, two. First, we are going to special efforts not to do any searching ourselves before we go “live”. It would not be fair to all the volunteers for us to get a jumpstart on the search. All we are doing is looking at a few random views to make sure that the focus and illumination are good. (And we haven’t seen anything — no surprise at all!) Also, the attitude for this should be “Have Fun”. If you’re not having fun doing it, stop and do something else! A good maxim for life in general!

Retrieved from “https://en.wikinews.org/w/index.php?title=Keep_your_eyes_peeled_for_cosmic_debris:_Andrew_Westphal_about_Stardust@home&oldid=4608360”


Shooter at Kansas City mall kills three

Sunday, April 29, 2007

A gunman killed three people and wounded two others Sunday after a shooting occurred at the Ward Parkway shopping center in Kansas City, Missouri. The shooting took place at around 4 p.m. local time (2200 UTC). The gunman fired shots from the parking lot, then entered the mall and continued shooting. He himself was then shot, presumably by local police.

Retrieved from “https://en.wikinews.org/w/index.php?title=Shooter_at_Kansas_City_mall_kills_three&oldid=440029”
