Apigee SpikeArrest Sync Across MessageProcessors (MPs)


Our organisation is migrating to Apigee.

I have a problem similar to this one, but due to the fact that I'm a Stack Overflow newbie and have low reputation, I couldn't comment on it: apigee - spikearrest behavior

If there is a way to merge the 2 questions, please let me know.

So, in our organisation we have 6 MessageProcessors (MPs), which I assume are working in a strictly round-robin manner.

Please see my config (it is applied to the target endpoint of the API proxy):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<SpikeArrest async="false" continueOnError="false" enabled="true" name="SpikeArrest-1">
    <DisplayName>SpikeArrest-1</DisplayName>
    <FaultRules/>
    <Properties/>
    <Identifier ref="request.header.some-header-name"/>
    <MessageWeight ref="request.header.weight"/>
    <Rate>3pm</Rate>
</SpikeArrest>

I have a rate of 3pm, which means 1 hit each 20 sec, calculated according to the Apigee docs.

The problem: instead of 1 successful hit every 20 sec, I get 6 successful ones within a 20 sec range and only then the SpikeArrest error, meaning the request hits once on each MP in a round-robin manner.

This means 6 hits per 20 sec reach the API backend instead of the desired 1 hit per 20 sec.

Is there a way to sync SpikeArrests across MPs?

ConcurrentRatelimit doesn't seem to help...

Any help or advice is appreciated!

Thanks!

SpikeArrest has no ability to be distributed across message processors. It is used for stopping large bursts of traffic, not for controlling traffic at the levels you are suggesting (3 calls per minute). You generally put it in the proxy request PreFlow and abort if the traffic is too high.

The closest you can get to 3 per minute using SpikeArrest with round-robin message processors is to set each one to 1 per minute, which would result in 6 calls per minute. You can only specify SpikeArrests as "n per second" or "n per minute", which gets converted to "1 per 1/n second" or "1 per 1/n minute" as mentioned above.
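To make the arithmetic explicit:

    Rate 3pm on one MP         -> 1 call allowed per 60/3 = 20 sec window on that MP
    6 MPs in round robin       -> up to 6 calls per 20 sec cluster-wide
    Rate 1pm on each of 6 MPs  -> 6 x 1 = 6 calls per minute cluster-wide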

Do you really only support 1 call every 20 seconds on your backend? If you are trying to support 1 call every 20 seconds per user or app, I suggest you try to accomplish that using the Quota policy. Quotas can share a counter across all message processors. You can also use quotas for all traffic (instead of per user or per app) by specifying a constant quota identifier. That would allow 3 per minute, but they could all come in at the same time during that minute.
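For illustration, a minimal sketch of such a distributed Quota policy (the policy name is hypothetical; with no Identifier element, or an Identifier referencing a constant value, all traffic shares one bucket):

<Quota async="false" continueOnError="false" enabled="true" name="Quota-Shared">
    <DisplayName>Quota-Shared</DisplayName>
    <!-- 3 requests per minute for ALL traffic combined -->
    <Allow count="3"/>
    <Interval>1</Interval>
    <TimeUnit>minute</TimeUnit>
    <!-- Distributed: one counter shared by all MPs; Synchronous: the counter is updated on every call -->
    <Distributed>true</Distributed>
    <Synchronous>true</Synchronous>
</Quota>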

If you are trying to protect against overtaxing your backend, the ConcurrentRatelimit policy could be used.
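A sketch of what that might look like, based on the standard ConcurrentRatelimit policy shape (the name and counts here are hypothetical; note this policy is attached to the target endpoint):

<ConcurrentRatelimit async="true" continueOnError="false" enabled="true" name="Concurrent-Rate-Limit-1">
    <DisplayName>Concurrent-Rate-Limit-1</DisplayName>
    <!-- At most 10 in-flight connections to the backend; ttl releases stuck slots after 5 sec -->
    <AllowConnections count="10" ttl="5"/>
    <!-- Count connections across all message processors -->
    <Distributed>true</Distributed>
    <TargetIdentifier name="default"/>
</ConcurrentRatelimit>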

The last solution would be to implement custom code.


Update to address further questions:

Restating:

  • 6 message processors, handled in round robin
  • you want 4 apps to each be allowed 5 calls per second
  • you want the rest of the apps to share 10 calls per second

To get the kind of granularity you are looking for, you'll need to use quotas. Unfortunately, you can't set a quota to a "per second" value on a distributed quota (a distributed quota shares the count among message processors rather than having each message processor keep its own counter). The best you can do is per minute, which in your case would be 300 calls per minute. Otherwise you can use a non-distributed quota (dividing the quota between the 6 message processors), but the issue you'll have there is that calls landing on some MPs will be rejected while others are accepted, which can be confusing to your developers.
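To make the difference concrete:

    distributed quota:      one shared counter for all 6 MPs, per-minute granularity -> 300 calls/min per app
    non-distributed quota:  each MP keeps its own counter -> 300 / 6 = 50 calls/min on each MP
    caveat: with 6 independent counters, whether a given call is accepted depends on which MP it lands on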

For distributed quotas, you'd set 300 calls per minute in an API product (see the docs) and assign that product to your 4 apps. Then, in code, if the product is not assigned to the current API call's app, you'd use a quota hardcoded to 10 per second (600 per minute) with a constant identifier rather than the client_id, so that all other traffic shares that one quota.
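A sketch of how that branch could be wired in the proxy endpoint, assuming a VerifyAPIKey policy named Verify-API-Key and a product named Premium-Product (all names hypothetical):

<PreFlow name="PreFlow">
    <Request>
        <Step>
            <Name>Verify-API-Key</Name>
        </Step>
        <Step>
            <!-- Apps assigned the premium product get the 300/min quota configured on the product -->
            <Condition>verifyapikey.Verify-API-Key.apiproduct.name = "Premium-Product"</Condition>
            <Name>Quota-Premium</Name>
        </Step>
        <Step>
            <!-- All other apps share one hardcoded quota (600/min, constant identifier) -->
            <Condition>verifyapikey.Verify-API-Key.apiproduct.name != "Premium-Product"</Condition>
            <Name>Quota-Shared-Default</Name>
        </Step>
    </Request>
</PreFlow>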

Quotas don't keep apps from submitting all their requests nearly simultaneously, and I'm assuming your backend can't handle 1200+ requests at the same time. You'll need to smooth the traffic using a SpikeArrest policy. You'll want to allow the maximum traffic through the SpikeArrest that your backend can handle. This will protect against traffic spikes, but you'll probably get some traffic rejected that would otherwise be allowed by the quota. The SpikeArrest policy should be checked before the quota, so that rejected traffic is not counted against the app's quota.
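Extending the sketch above, the SpikeArrest step would simply be placed first in the request flow (policy names hypothetical):

<Request>
    <!-- Checked first: spike-rejected traffic never reaches the quota counters -->
    <Step>
        <Name>Spike-Arrest-Smoothing</Name>
    </Step>
    <Step>
        <Name>Verify-API-Key</Name>
    </Step>
    <!-- ...conditional Quota steps as above... -->
</Request>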

As you can see, configuring for situations like yours is more of an art than a science. My suggestion would be to do significant performance/load testing, and tune until you find the correct values. If you can figure out how to use non-distributed quotas with acceptable performance and predictability, that would let you work with per-second numbers instead of per-minute numbers, making massive spikes less likely.

Good luck!

