It is very simple to use S3 as storage for static content in a Rails application: just add the paperclip and aws-sdk gems. But what if you want to hide the direct links to S3 items, or even restrict access to some files by user roles and access rights? Here is a working example: Continue reading “Hide and protect your AWS S3 endpoint (Rails+Nginx example)”
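The full example is in the linked post; the core idea behind such setups is that the Rails app authorizes the request and then hands the actual file transfer to nginx through the X-Accel-Redirect header, so the client never sees the S3 URL. A minimal sketch – the helper name and the `/s3_files` internal location are illustrative, not taken from the post:

```ruby
# Hypothetical helper: build response headers that tell nginx to fetch
# the file through an internal location (proxied to S3) instead of
# exposing the S3 URL to the client.
def accel_headers(s3_key)
  {
    "X-Accel-Redirect" => "/s3_files/#{s3_key}", # internal nginx location
    "Content-Disposition" => "attachment"
  }
end
```

In a controller you would first check the current user's rights, then merge these headers into the response and return an empty body; nginx does the rest.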
Here is a nice trick to achieve Conditional GET requests for lists (the Index operation in classic REST terms). You know this mechanism for single items (the Show operation): the browser asks for a resource the first time, caches its last_modified value, and sends an “If-Modified-Since” header with the next request. The server checks resource.updated_at in the database and responds “200 OK” with content as usual (if the resource is newer), or responds “304” without content if the resource has not changed.
You can see the savings in computing resources, traffic, parsing and so on. But how do you implement this technique for lists? There is no “updated_at” attribute for a list…
But don’t give up! Just take the newest resource in the list and use its “updated_at” attribute.
Here is an example for Ruby on Rails:
updated_at = models.max_by(&:updated_at).try(:updated_at) || Time.at(1)
The last part is a trick for empty lists. Easy!
Caveats: it will not work when you destroy a model in the collection by actually deleting it from the database, because the newest “updated_at” value will not change, or may even become older. The browser will not get the actual (changed) content. Use a soft-delete gem such as Paranoia (or mark records with something like ‘is_deleted’) instead, or switch to ETags.
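Outside of Rails, the whole flow can be sketched in plain Ruby (`Post`, `handle_index` and the status/body pair are illustrative, not a real framework API):

```ruby
require "time"

Post = Struct.new(:title, :updated_at)

# The newest updated_at in the collection serves as the list's
# Last-Modified; Time.at(1) covers the empty-list case.
def list_last_modified(models)
  models.map(&:updated_at).max || Time.at(1)
end

# Respond 304 when the client's If-Modified-Since is not older than
# the newest item in the list, 200 with the content otherwise.
def handle_index(models, if_modified_since)
  last_modified = list_last_modified(models)
  if if_modified_since && last_modified <= Time.httpdate(if_modified_since)
    [304, nil]
  else
    [200, models.map(&:title)]
  end
end
```

In Rails itself, calling `fresh_when last_modified: updated_at` inside the index action performs this comparison and renders the 304 for you.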
How do you detect what is in an uploaded picture? A cat or a dog? If there are people, what is their age and gender? Here is a short manual. Continue reading “Find cat at the image: visual recognition”
Of course you know there are at least two popular programming paradigms: imperative and functional. But there is another, very interesting paradigm, which I call “dataflow”. In this post I want to explain why it is good, and how to use it to build web-related services. Continue reading “How to write a code within dataflow paradigm”
In this post we will discuss an approach to building website frontends in terms of the microservices paradigm, dataflow, and communication between services through the exchange of events (messages). I will use the Backbone.js framework as an example.
The story began when I was once asked how to build an HTTP gate (API endpoint) that proxies requests to a group of lower-level servers (“upstreams”) without any knowledge of their exact IP addresses, their quantity, or their “health” state. In addition, each request must be processed by the server that is least loaded at the moment, and the response should be sent synchronously – as a plain HTTP response (not as a secondary callback).
I do not pretend to be original, but I will tell you how we did it: Continue reading “Queues: web cluster without upstream orchestration”
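The full story is in the linked post, but the shape of the solution can be sketched with in-process queues standing in for a real broker such as RabbitMQ (the worker count and messages below are illustrative): the gate publishes each request to a shared queue together with a private reply queue, and whichever worker is free pulls the next job – so “least loaded” comes for free and the gate never needs a list of upstreams.

```ruby
requests = Queue.new # shared work queue; a real broker in production

# Workers: each pulls the next job only when it is free, so the least
# loaded one is simply the first to ask for work.
workers = 2.times.map do |i|
  Thread.new do
    loop do
      job = requests.pop
      break if job == :stop
      payload, reply_queue = job
      reply_queue.push("worker #{i}: #{payload.upcase}")
    end
  end
end

# The "gate": publish the request with a private reply queue and block
# on it, so the caller still gets a plain synchronous response.
def call_cluster(requests, payload)
  reply_queue = Queue.new
  requests.push([payload, reply_queue])
  reply_queue.pop # blocks until some worker replies
end

answer = call_cluster(requests, "ping")
2.times { requests.push(:stop) }
workers.each(&:join)
```

With a real broker the reply queue becomes an exclusive queue plus a correlation id (the classic RPC-over-queues pattern), but the control flow is the same.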
Even my experienced colleagues don’t understand the basic fundamental difference: AMQP promises you guaranteed delivery (and processing), but HTTP doesn’t.
Disclaimer: I know the RabbitMQ documentation says “not guaranteed”. But Rabbit at least tries to do it (and does it very well), while HTTP guarantees nothing by design.
Let me remind you how HTTP works: the client sends a request and “hangs” waiting for the response. If you abort the execution, or the client is disconnected from the server for any reason, the answer is lost forever. Some systems, AFAIK, will also interrupt processing if the client drops the connection.
If a fatal error occurs on the server side, we may get a response code (500/503), or we may get nothing at all. And we don’t know whether our request was processed completely, or partially, or died just as the response was being sent to us, or whether there was a deadlock between two “parallel” requests, or the request is simply that long by nature, or the backend was very busy and we were rejected by the balancer, or… I think you understand me. Continue reading “Key difference between AMQP and HTTP in distributed web applications”
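The guarantee AMQP aims at is “at least once”: a message stays unacknowledged until the consumer confirms it, and is redelivered after a failure. A toy in-memory model of that contract (`TinyBroker` is invented for illustration; real AMQP acknowledgements work over the wire):

```ruby
# Toy model of at-least-once delivery: a message is requeued when the
# consumer fails before acknowledging, so it survives a crashed attempt.
class TinyBroker
  def initialize
    @queue = []
  end

  def publish(msg)
    @queue.push(msg)
  end

  # Deliver each message; a raised error acts as a "nack" and puts the
  # message back for redelivery. Finishing the block acts as the "ack".
  def consume
    until @queue.empty?
      msg = @queue.shift
      begin
        yield msg
      rescue StandardError
        @queue.push(msg) # redeliver on failure
      end
    end
  end
end
```

An HTTP client that dies mid-request has no equivalent of the requeue branch – the request is simply gone.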
Let me remind you how websites were built in the past: there was a server application that received a request from the user, processed it, rendered an HTML page (or performed the requested operation and rendered a similar page) and returned it to the user. Simple rule: the more RPS you can process, the more visitors you can serve.
As the internet grew, people began to counteract “high load” with typical methods: they set up nginx as a front server and several backend servers (upstreams) with copies of their web application, and spread the load across them. Randomly (by round-robin) or with a little trick: for example, the first upstream had a 1s timeout, the second upstream a 2s timeout, and so on. Of course, there were more clever schemes.
That was the typical approach more than six years ago. Continue reading “Where are the queues coming from in web?”
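That classic front-server setup looks roughly like this in nginx terms (the addresses and the weight are illustrative):

```nginx
http {
    upstream backend {
        # round-robin by default; weight skews the share of requests
        server 10.0.0.1:8080;
        server 10.0.0.2:8080 weight=2;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```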
Hello! My name is Grigoriy, and my native language is Russian. This is an English mirror of my Russian professional blog at www.dobryakov.com. I apologize for any mistakes in my English – I’m working on it every day.
I’m a developer with full-stack business experience in the web industry, from sales to top-level technical management. I am familiar with a wide spectrum of popular tools and methods for the whole life cycle of a software business – from communicating with customers to continuous delivery and long-term product support. I have production experience in building cloud and SaaS environments, designing distributed SOA systems, and DevOps.
You can view and download my full English CV here.
Please feel free to contact me anytime by e-mail at email@example.com. Have a nice day!