will the great 2010 Fraser sockeye forecast start to downgrade?

The latest reports coming out of the Pacific Salmon Commission show that marine migrations are largely over for this year. Marine test fisheries have either stopped completely or are simply not catching much.

Catch to date suggests that about 12.7 million Fraser sockeye have been caught in a variety of fisheries: First Nation (1.5 million), Recreational (200,000 — a number pulled from someone’s hat essentially), Commercial (over 9 million – the majority, ~5.5 million, to the purse seine fleet), plus about 1.9 million caught in Washington tribal and commercial fisheries.

Hydroacoustic estimates from the counting stations on the lower Fraser suggest approximately 11.8 million have already gone by, headed upriver.

Combine the catch with the fish headed upriver and roughly 24.5 million are accounted for. The in-season total run size estimate, meanwhile, suggests over 34 million.

That leaves roughly 10 million apparently somewhere between the Salish Sea (Strait of Georgia) and Mission…

There are some big daily counts being estimated past Mission on the lower Fraser. One of the biggest of the year came the other day at 544,000, with averages more in the 200,000 – 300,000 range. Even at those big numbers, it will still take many days of sustained migration to reach the 34 million in-season estimated total run size. (And that is a far, far, far cry from the pre-season estimate of ~11 million.)

Might we see one of those mysterious disappearances again… Total in-season estimate (built from the marine migration) minus catch minus escapement estimated past the lower Fraser counting stations = hey… missing fish…
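
For the curious, the back-of-the-envelope accounting with the rounded in-season numbers above looks like this (a sketch only; the Pacific Salmon Commission’s actual bookkeeping is more involved):

    # Rough Fraser sockeye accounting, using the in-season figures quoted above.
    # All values in millions of fish; rounded estimates, not official PSC math.
    catch_total = 12.7      # all fisheries combined (Canadian + Washington)
    past_mission = 11.8     # hydroacoustic estimate past lower Fraser stations
    accounted_for = catch_total + past_mission            # ~24.5 million
    in_season_run_size = 34.0                             # in-season estimate
    unaccounted = in_season_run_size - accounted_for      # ~9.5 million
    print(f"accounted for: {accounted_for:.1f} M; unaccounted: {unaccounted:.1f} M")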

Maybe the past missing sockeye moved into some of those new housing developments moving up the side of the Fraser Valley?

We’ll see what happens this year. Still about 30% of the in-season predicted run has yet to actually materialize in the River…

5 thoughts on “will the great 2010 Fraser sockeye forecast start to downgrade?”

  1. Brian

    “Recreational (200,000 — a number pulled from someone’s hat essentially)”

    Actually, there is more to it than that. It involves some tireless work by dedicated individuals who interview many, many anglers during the recreational fishery. I used to do creel surveys for MOE (summer and winter), and I don’t remember ever using a hat, other than to keep the sun off my head. I am sure the individuals mentioned at the end of the document will be able to help you further.

    http://www.pac.dfo-mpo.gc.ca/fraserriver/recfishery.htm

    http://www.pac.dfo-mpo.gc.ca/fraserriver/recreational/HowFraserRiverCreelWorks.pdf

  2. salmon guy Post author

    thanks for that – I’m sure some folks will find it helpful. I know the creel surveys pretty well; I’ve done them myself too.

    Key words here: “estimates”:

    In a Nutshell (using Harvest as the example)
    To estimate harvest in the Fraser River recreational fishery, we use two key pieces of information:
    1) a Rate of Harvest estimate: in the Fraser Creels, this is expressed as the number of fish harvested per hour of effort (or fish harvested per angler-hour); and,
    2) an Angler Effort estimate: this is expressed in hours of angling (or angler-hours).

    And one key calculation:
    3) the Harvest estimate is generated by multiplying together the two estimates above (Rate of Harvest x Angler Effort = Harvest).

    The thing with multiplying two “estimates” together is that it greatly increases the margin of error – exponentially in some cases. And thus my “pull it from a hat” comment. Creel surveys are notorious for being nothing more than estimates. Yes, there are some great folks out there doing the work, and yes, it does give fish managers a better idea of what sort of pressure is out there… but… it’s still a wild estimate.
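
    To put rough numbers on that (a back-of-the-envelope Python sketch with made-up uncertainties – not DFO’s actual error analysis): if the harvest-rate estimate and the effort estimate each carry, say, 20% uncertainty, the usual delta-method approximation puts the product’s uncertainty near 28%:

        import math

        # Hypothetical creel numbers -- purely illustrative, not from any real survey.
        rate = 0.5                         # estimated fish harvested per angler-hour
        effort = 400_000                   # estimated angler-hours
        cv_rate, cv_effort = 0.20, 0.20    # assumed 20% relative error on each

        harvest = rate * effort            # 200,000 fish
        # For independent estimates, relative variances add, so the
        # coefficients of variation combine in quadrature.
        cv_harvest = math.sqrt(cv_rate**2 + cv_effort**2)   # ~0.28, i.e. ~28%
        print(f"harvest ~ {harvest:,.0f} +/- {cv_harvest * harvest:,.0f} fish")

    So multiplying two estimates does widen the relative error of the result – here from 20% on each input to roughly 28% on the product.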

    DFO itself suggests that on the Early-timed Chinook they can’t even get 10 per cent coverage of the sport fishers out there. Then add in an industry that has a vested interest in continuing to harvest as much as possible (or someone else will), and we get what are called “fish stories”. Unfortunately, this most likely means under-reporting. Reading some of the sport fishing forums earlier this year, some sport fishing operators were suggesting that they would under-report or refuse to report. This sort of stuff takes away from the folks who report accurately and take it very seriously.

    The ongoing challenge of looking after public resources…

  3. Brian

    “The thing with multiplying two “estimates” together is that it greatly increases the margin of error – exponentially in some cases.”

    How? Show me… What error are you referring to? Do you know these particular creel results well enough to come to that conclusion? I imagine you didn’t contact those involved in the document.

    Creel surveys do involve estimates; however, they can also involve some important biological sampling (e.g., CWT). It is not practical (nor is it feasible in many cases) to obtain a complete census of every angler, at every access point, in every boat, and at all times of the day. You should know that if you have done creel surveys. These creel surveys have a sampling program/design behind them. Key things are sample size, randomization of sampling (e.g., stratified, simple random, systematic, etc.), replication, and what level of precision you want for management purposes. The only issue I have is the naked estimate. These estimates should have precision reported with them.

    “DFO itself suggests that on the Early-timed Chinook they can’t even get 10 per cent coverage of the sport fishers out there. Then add in an industry that has a vested interest in continuing to harvest as much as possible (or someone else will), and we get what are called “fish stories”. Unfortunately, this most likely means under-reporting. Reading some of the sport fishing forums earlier this year, some sport fishing operators were suggesting that they would under-report or refuse to report. This sort of stuff takes away from the folks who report accurately and take it very seriously.”

    It is important to note that precision essentially depends only on the absolute sample size, not the relative fraction of the population sampled. Just because they only obtain 10% coverage does not necessarily, in itself, make the survey invalid. Example: let’s say you are making a stew and you stir it really well before you sample it with a teaspoon to see if the peas are cooked… Then you increase the amount of stew, stir it really well again, and sample it with a teaspoon to check the peas. Provided you stir both pots the same (randomize), sampling from the smaller pot of stew is just as valid as sampling from the larger pot. Another example: a sample of 1000 people taken from Canada (population of 33,000,000) is just as precise as a sample of 1000 people taken from the US (population of roughly 310,000,000). For spawning ground sex ratio, it is actually more important to sample spatially and temporally over time than to obtain a threshold number of carcasses or a fixed, arbitrary percentage.
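
    To make that concrete, here is a quick simulation sketch in Python (my own illustration – the population sizes are from the examples above; the 40% proportion is invented). The spread of a 1,000-person sample barely changes when the population grows tenfold:

        import random

        def se_of_sample_proportion(pop_size, p_true=0.4, n=1000, trials=1000):
            """Standard deviation of the sample proportion over repeated samples."""
            k_true = int(pop_size * p_true)   # number of 'yes' individuals
            estimates = []
            for _ in range(trials):
                hits = sum(1 for _ in range(n) if random.randrange(pop_size) < k_true)
                estimates.append(hits / n)
            mean = sum(estimates) / trials
            return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5

        # Canada-sized vs. US-sized population, same sample size of 1,000:
        for pop in (33_000_000, 310_000_000):
            print(pop, round(se_of_sample_proportion(pop), 4))   # both ~0.0155

    Both come out around sqrt(0.4 × 0.6 / 1000) ≈ 0.0155, regardless of population size.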

    People who do not cooperate with creel surveys are only screwing themselves and doing a disservice to management. Personally, I think anglers who do not cooperate have no business complaining about “mismanagement”, because it is their own actions that contribute to the challenges you mention. In my experience with the province, most anglers are very willing to cooperate with creel surveys.

  4. salmon guy Post author

    thanks Brian, appreciate the engagement.

    I know the creel surveys well enough, and fisherfolks well enough, that I wouldn’t trust the results (for ‘mngmt purposes’) as far as I can throw them. They paint a picture, more than acting as a precise photograph.

    2 x 2 = 4
    oh, but whoops we made an error, it should actually be:
    3 x 3 = 9 or even 2 x 3 = 6

    Over two times the difference in some cases. I made an error though: it’s not exponential, but it grows by factors. Bigger error at smaller numbers.

    Love the stew analogy; unfortunately, peas don’t make a stew… I never suggested ‘invalid’; simply highlighting that “estimate” is a key part of the equation.
    Est. x Est. (as outlined by DFO guidelines) = bigger estimate. Simple multiplication.

    precise – “The ability of a measurement to be consistently reproduced.” Or my favorite from the online dictionary, as an adjective, for example “precision bombing”… it means “Of or characterized by accurate action”. Really… how ‘precise’ is “precision bombing”…? Seems like a bit of an oxymoron.

    Not so sure about your example. The precision might very well depend on what one is sampling for. For example, using pea ‘done-ness’ as a test for whether a couple of pots of stew are cooked or not… I’m not so sure I’d be having stew at your place… : )

    Sampling 1000 people from Canada on… for example… asking who the Prime Minister of Canada is (knowledge of world leaders) and then sampling 1000 people in the U.S. might give you some rather different results.

    Now ask a random fisher on a stream bank to identify the species of salmon – or whether it’s a male or a female – and results begin to vary. Some folks can’t tell a humpy from a sockeye…

    And yes, CWT (coded wire tag) information has some value. Sadly, though, DFO is relying far too much on CWT information to manage Chinook. As you well know, CWTs are only present in hatchery fish. Should we really be managing populations based on information from hatchery fish? Does this paint an accurate picture of population health?

    Don’t get me wrong, though – I’m not suggesting that creel surveys aren’t valuable sources of information. I am a big proponent of folks actually being out there with their hands and feet in the creeks and rivers, interacting with people who are interacting with the ‘resource’. No better information. Sure beats computer modeling, or trying to get estimates of fishing pressure from satellite images, or the like.

    Similar to the fantastic network of coastal patrolmen that DFO used to utilize – up and down the coast of BC. People out in their boats, with the resource every day, getting a feeling for # of salmon, # of bears, # of wolves, spawning ground health, fry rearing habitat health, # of seals, # of orcas, # of whitesided dolphin and so on and so on.

    Now? Lots of computer modeling, and basing decisions on sampling a couple thousand out of 33,000,000, and so on.

    I appreciate the point you’re getting at. My issue isn’t with sampling statistics per se… more that there appears to be far too much reliance on sampling small portions and then treating the results as gospel. There is a lot more knowledge out there than random sampling and statistics… thousands of years of knowledge, and lifetimes of knowledge, and communities of knowledge that just aren’t factored into the equations enough (literally and figuratively).

    For example, I’ve had First Nations elders in the Yukon explain to me how they could gauge the strength of runs by which side of the river the salmon swam up, and by certain places where they held in-river to rest. The Western-scientific world looks at that kind of information, rolls its collective eyes, and writes it off as superstition and folklore…

    why look to thousands of years of knowledge, or even a couple generations of settler culture community knowledge, when we have statistics, random sampling, CWT information from hatchery fish, and computer modeling techniques?

  5. Brian

    “2 x 2 = 4
    oh, but whoops we made an error, it should actually be:
    3 x 3 = 9 or even 2 x 3 = 6

    Over two times the difference in some cases. I made an error though: it’s not exponential, but it grows by factors. Bigger error at smaller numbers.

    Love the stew analogy; unfortunately, peas don’t make a stew… I never suggested ‘invalid’; simply highlighting that “estimate” is a key part of the equation.
    Est. x Est. (as outlined by DFO guidelines) = bigger estimate. Simple multiplication.”

    Just because you have a bigger estimate doesn’t necessarily mean that your error is larger. There is more involved than just simple multiplication. It depends on the sampling design (sample size, randomization, replication, precision desired, etc.). Standard error depends on the standard deviation and the sample size. If the individual data are highly variable or the sample size is insufficient, this can lead to a higher standard error, but that is hard to determine when you have not looked at the analysis or have any idea of the sampling program. Conversely, if the individual data are not as variable and/or the sample size is quite large, this will tend to reduce the standard error. In order to determine the appropriate sample size, you first need to specify some measure of precision that is required. For example, a particular study or species may require that the results be accurate to within 5% of the true value. It just cannot be assumed that the data have lots of error when you haven’t even examined the contents. This is why I strongly suggested you talk to the individuals in the document I provided. I am sure you have some good friends who might know a few things, but the individuals mentioned in the document are the ones you should be speaking to. Anything else and you are just speculating (in my opinion).

    When you use a randomized block design you can add the estimates from two different methods together. For example, to estimate the total catch you would take the number of fish per boat (or per angler) and multiply it by the number of boats. To find the standard error of that total you would multiply the standard error of the per-boat estimate by the same number of boats. To find the estimated standard error of the grand total you would take the square root of the sum of squares of the standard errors in each stratum (or method). If you increase the number of boats sampled, this would tend to bring down the standard error – not increase it. Also, there is more than one standard error formula – not one that fits all. That is why knowing the data and the analysis is important. Standard error basically asks, “if I were to repeat the study again, how much would my result vary from the previous result?” Last week I learned this in my refresher stats course; the instructor was a very prominent statistician at SFU. This is why I was puzzled by your response last week.
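
    In code, that grand-total arithmetic looks something like this (a minimal sketch with invented strata and numbers, just to show how the stratum standard errors combine):

        import math

        # Hypothetical strata: (estimated total catch, standard error of that estimate)
        strata = {
            "boat anglers": (12_000, 1_500),
            "bank anglers": (8_000, 1_200),
            "bar fishery": (5_000, 900),
        }

        grand_total = sum(total for total, se in strata.values())
        # Standard errors of independent stratum estimates combine as the
        # square root of the sum of their squares -- they do not simply add.
        grand_se = math.sqrt(sum(se ** 2 for total, se in strata.values()))
        print(f"total ~ {grand_total:,} fish, SE ~ {grand_se:,.0f}")

    Note the combined SE (about 2,100) is well below the straight sum of the three SEs (3,600).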

    The stew example was to illustrate that precision is not dependent on the relative fraction of the population sampled. If you test a stew in a smaller pot with a teaspoon or tablespoon, you wouldn’t necessarily test a stew in a larger pot with a coffee mug full of stew… right? If you did, I am not sure I would be showing up for dinner at your house anytime soon either… lol.

    “Sampling 1000 people from Canada on… for example… asking who the Prime Minister of Canada is (knowledge of world leaders) and then sampling 1000 people in the U.S. might give you some rather different results.”

    This is true. If you want more specific information like that, you would put more of your sampling into Canada, but the point that the relative fraction matters less than the absolute sample size still stands.

    “Now? Lots of computer modeling, and basing decisions on sampling a couple thousand out of 33,000,000, and so on.”

    These estimates, to the best of my knowledge, are not based on computer modelling. Read what I stated earlier… You can’t just say that sampling is insufficient without doing some pre-planning. It might be that sampling low numbers is not sufficient, but you don’t know that until you look at your sample size in relation to the standard deviation of your data and the precision you wish to obtain.

    “I appreciate the point you’re getting at. My issue isn’t with sampling statistics per se… more that there appears to be far too much reliance on sampling small portions and then treating the results as gospel.”

    I don’t know of anyone I work with who says what is done is “gospel”. Humans make mistakes – and not just in the biological sciences. I am sure the people doing this creel survey would like to get as large a sample size as possible, but there are logistics and budgets to consider. Unfortunately, staff cannot be everywhere all the time.

    “For example, I’ve had First Nations elders in the Yukon explain to me how they could gauge the strength of runs by which side of the river the salmon swam up, and by certain places where they held in-river to rest. The Western-scientific world looks at that kind of information, rolls its collective eyes, and writes it off as superstition and folklore.”

    I appreciate the way First Nations used to harvest and manage their fisheries, but the playing field is not the same as it used to be. There are more people involved in catching salmon and utilizing their habitats. Whether you like the commercial fishery or not, it needs timing information while the fish are out in the ocean. Although seeing them in the terminal areas may have validity, it might not be the best way to manage the fishery in the ocean; however, I am open to all possibilities. Instead of splitting into different camps on these issues, people need to start thinking collectively and working together. Right now it is an “us vs. them” attitude, which only creates more distrust. Rodney King couldn’t have said it any better.

    Secondly, I would imagine there would be some level of uncertainty attached to those past methods too. I am not trying to downplay them, just being honest about the fact that uncertainty is unavoidable with salmon. Even your August forecast (on cbc.ca) was blown out of the water (see… forecasting isn’t so easy after all). In addition, there are treaty obligations that Canada and the US have to fulfill. I realize you may not agree with the test fishery, but the estimates this year have mapped on quite well. We won’t know about the late run for a little while yet. These in-season numbers are not spawning ground escapement estimates, which get erroneously reported in the media.

    Peace out.
