Initial Prediction Results Involving Targets for 1 June 2008

Months prior to 1 June 2008, the remote viewers involved in this study remote-viewed targets that had not yet been chosen. That is, the sessions were done first, and the targets were assigned to those sessions afterward. The sessions, the target pool, and the list of 999 randomized target assignments were posted on this web site in encrypted form for download prior to 1 June 2008. On 4 June 2008, the closing Dow Jones Industrial Average was used to determine which of the 999 randomized target assignments would be applied to the remote-viewing sessions. Thus, each remote-viewing session effectively predicted the target that would later be assigned to it.
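
The exact rule that converts the closing average into an assignment number is not specified on this page, so the short sketch below is only an illustration of the general idea: a publicly verifiable number, fixed after the sessions are locked in, selects one of 999 pre-committed assignments. The modular-arithmetic mapping and the sample closing value are assumptions made for this example.

    def select_assignment(djia_close, n_assignments=999):
        """Map a closing index value to an assignment number in the range 1..n_assignments."""
        cents = round(djia_close * 100)      # use the close to the cent so small moves change the pick
        return (cents % n_assignments) + 1

    # Illustrative only: a hypothetical closing value, not the actual 4 June 2008 close.
    print(select_assignment(12390.41))       # prints a number between 1 and 999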

The table below presents one interpretive measure of how well the viewers did: the "clarity scores" for the remote-viewing sessions conducted for targets with target dates of 1 June 2008. The scores are explained further below.

Viewer                  T1    T4    T7    T10   T13   T16   T19   T22
CRV
Darryl Smith            3     3     3     3
Viewer 423              2     3     3     0
Pat Sage                3     3     3     3
HRVG
Dick Allgire            2     1     1     0     1.5   2     3*
Sita Seery              2     2     0     1     1     1
Maria Naulty            1.5   0     3     3
Debra Duggan-Takagi     3     1     2     2
Anne M. Koide           3     3     3     3

* This session was done after 1 June 2008 and was not included in the encrypted files.
# These sessions were corrupted due to a faulty closing process and cannot be included in the study.

Clarity Scores applicable only to 2008 targets

"Clarity scores" evaluate the sessions with respect to the known and verifiable characteristics of the target. Clarity scores can range from 0 to 3, and they convey the following meaning:

3: The known and verifiable target aspects are described exceptionally well with few, minor, or no decoding errors.
2: The known and verifiable target aspects are described well. There may be some notable decoding errors.
1: The known and verifiable target aspects are described minimally. There may also be significant decoding errors.
0: The known and verifiable target aspects are described very poorly or not at all.
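
For analysts tabulating results, the rubric above is straightforward to encode. The sketch below is illustrative only: the score-to-label mapping follows the scale above, while the summary function and the sample score list are hypothetical and not part of the study.

    CLARITY_LABELS = {
        3: "Described exceptionally well; few, minor, or no decoding errors",
        2: "Described well; possibly some notable decoding errors",
        1: "Described minimally; possibly significant decoding errors",
        0: "Described very poorly or not at all",
    }

    def summarize(scores):
        """Return the mean clarity score and a count of sessions at each whole-number level."""
        counts = {level: 0 for level in sorted(CLARITY_LABELS)}
        for s in scores:
            counts[int(s)] += 1              # half-point scores such as 1.5 are floored for counting
        mean = sum(scores) / len(scores) if scores else 0.0
        return mean, counts

    # Hypothetical four-session score list:
    mean, counts = summarize([3, 2, 1.5, 0])
    print(round(mean, 2), counts)            # 1.62 {0: 1, 1: 1, 2: 1, 3: 1}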

Decoding errors occur when a remote viewer perceives something that is real at the target but describes that perception inaccurately. The perception itself is genuine; only its characterization is partially wrong. For example, if someone describes a city with tall skyscrapers as a mountain range, that is a decoding error: the perception is correct in terms of the overall topography, but the characterization of it as a mountain range is not. Likewise, if a person places trees or animals in a barren natural landscape, that is a decoding error: the perception of a natural landscape is correct, but the conscious mind has added things it considered normal for such a landscape. Experienced remote viewers are trained to minimize decoding errors, and analysts are trained to discount decoding errors that are more common with certain types of targets.