
Monday, May 26, 2014

seconds_until_end_of_day expires for Cache in Rails

A colleague of mine had to cache some HTML until the end of the day but found it difficult because the cache expects expires_in, which is relative. We can't pass an absolute Time.now :P

So we now have seconds_until_end_of_day in 4.0.2, but we use Rails 3 :(

But patching Time is simple

def seconds_until_end_of_day
  end_of_day.to_i - to_i
end

http://apidock.com/rails/v4.0.2/Time/seconds_until_end_of_day
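Here is a plain-Ruby sketch of the backport. Rails' own version relies on ActiveSupport's Time#end_of_day (23:59:59); this snippet computes the end of day by hand so it runs without ActiveSupport.

```ruby
# Backport sketch of Rails 4's Time#seconds_until_end_of_day.
# Rails defines it as end_of_day.to_i - to_i; here end_of_day (23:59:59)
# is derived from tomorrow's midnight so no ActiveSupport is needed.
class Time
  def seconds_until_end_of_day
    midnight_tomorrow = Time.new(year, month, day) + 86_400
    (midnight_tomorrow.to_i - 1) - to_i
  end
end

# With the patch in place, a cache entry can expire at the end of the day:
#   Rails.cache.write('daily_html', html,
#                     expires_in: Time.now.seconds_until_end_of_day)
```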

Tuesday, April 29, 2014

Observers and Migrations

The other day we created a model migration and then added an observer for the model right after writing the migration. Surprisingly, the observer (now registered in the application.rb file) was loaded first when the environment booted, which then loaded the model and tried to fire a DB query for its columns.

This fails and therefore the migration does not go through.

So for future reference, always migrate first and then add the observer! 
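For reference, the registration that triggers the boot-time column lookup looks like this (the observer name is a hypothetical example, not our actual model):

```ruby
# config/application.rb -- hypothetical observer registration.
# Add this line only AFTER the model's table exists: registering an
# observer makes the environment load it on boot, which loads the model
# and queries the table's columns -- aborting any pending migration.
config.active_record.observers = :order_observer
```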

Monday, March 10, 2014

Caching Simple Navigation

Moral: Simple Navigation is not that simple

We, in our firm, have different navigations for different types of users. We wanted the flexibility of building our navigation from the backend (Rails) so that there is just one file to maintain with all configurations. Simple Navigation works great in this respect.

The configuration is complex, but is absolutely generic. We have a couple of navigations to start with, for the normal application and when a user visits the application via the iPad.

The navigation for the normal user takes more than 500ms to render. NOT GOOD! We decided to cache the HTML. For this we needed to split the navigation yet again, this time by login state. We therefore now have four types of navigation: loggedout-normal, loggedin-normal, loggedout-iPad and loggedin-iPad.

We also had some badges on the navigation for unopened items. The next step was to remove these from the navigation file. We created a notifications controller and wrote a small piece of JavaScript which takes data from the JSON response of the controller's show action and updates the corresponding sections in the navigation.

Now came the hard part. Caching.

We need to define keys for the Caching itself first.

For a logged-out normal user, this is quite straightforward: take the locale of the navigation bar. All users with the same locale get the same navigation bar.

For logged-in normal users it is a little trickier. We have some user-specific content in the navigation bar, such as the username, so we need the user_id. The user could also change his locale, which triggers a change in the updated_on field of the users table, so that has to go into the key as well. And we have a specific link to backend services if the user has admin privileges, for which we need to evaluate the user's roles. An MD5 hash of the roles keeps the key short.
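A minimal sketch of such a key helper, assuming hypothetical model attributes (roles, updated_on) rather than our actual schema:

```ruby
require 'digest/md5'

# Hypothetical cache-key helper for the navigation partial.
# Logged-out users share one key per locale; logged-in users get a key
# built from user_id, locale, updated_on and an MD5 of their roles.
def navigation_cache_key(user, locale)
  return "nav/loggedout/#{locale}" if user.nil?

  # Sort the roles so the digest does not depend on their order.
  roles_digest = Digest::MD5.hexdigest(user.roles.sort.join(','))
  "nav/loggedin/#{user.id}/#{locale}/#{user.updated_on.to_i}/#{roles_digest}"
end
```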

We cache the non-highlighted navigation bar for sanity's sake and then highlight the active section according to the page being rendered. For this we could either write JavaScript to add the correct classes or generate a stylesheet which highlights the correct section when the right classes are present.

We decided on the stylesheet. It is generated from the simple_navigation configuration and emits the correct CSS. The file is a LESS file: we extracted the styles that highlight navigation items into mixins and used those mixins inside our custom classes.

The next tough part is the sub navigation. While this problem still lurks, we are trying to get away from the sub navigation altogether.

While this post is more conceptual than technical, it might help others trying to solve something similar.

:)

Tuesday, January 7, 2014

Altering a MySQL Table

In a normal world without a lot of records, altering a table (adding or deleting columns and indices) is an easy job. Have a look at the syntax here.

It essentially creates a copy of the original table, applies the ALTER TABLE to the new table, and copies row by row until the end of the table is reached. The essential problem is that the row-by-row copy locks the table, because we don't want inconsistencies while we copy, i.e. no writes during the copy.

If the table has lots of rows then this becomes a real problem.

It seems that those intelligent guys have a solution to this too.

One Approach : 
- Make changes on the Slave and Upgrade the Slave to Master.

Another (Facebook's Way) :
- Create a temporary table which is a copy of the main table
- Apply the changes to the temp table
- Add something which could be used to run changes on the temp table
- Copy data from the main table to the temp table
- Lock the main table
- Replay the changes on the temp table
- Rename the temp table to the original table after renaming the original table to something else
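This copy-and-replay recipe is what SoundCloud's LHM gem (linked below) automates. A hedged sketch of a migration using it; the table and column names are illustrative:

```ruby
# Online schema change with the lhm gem: it copies `users` to a temp
# table, replays writes via triggers, then swaps the tables -- the same
# steps as above, without the long table lock.
require 'lhm'

class AddNotesToUsers < ActiveRecord::Migration
  def up
    Lhm.change_table :users do |m|
      # Column definition is a raw MySQL DDL string.
      m.add_column :notes, "TEXT DEFAULT NULL"
    end
  end
end
```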

Phew

Thanks to Martin for the star on GitHub.

http://www.facebook.com/note.php?note_id=430801045932
https://github.com/soundcloud/lhm

Wednesday, December 25, 2013

Bucket Maker

After some days of research and work, I managed to create a gem focused on creating buckets for objects and querying them when needed.

Have a look at https://github.com/dinks/bucket_maker, a gem for Rails 3.2 through 4.

Comments most appreciated !

Thursday, December 19, 2013

That Great Feeling

Words can't express that feeling when you upload something to a community that everyone can use !

https://rubygems.org/profiles/dvasudevan

:)

Sunday, December 1, 2013

Module extensions 1.9 and 2.0

I did not know that this throws a TypeError on 1.9 but passes on 2.0:

x = Module.new { def foo; end } 
Module.new { define_method :bar, x.instance_method(:foo) }

From Rails !
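A runnable illustration of the same trick (the method bodies here return values so the effect is visible; that part is my addition):

```ruby
# On Ruby 1.9 the define_method line raises TypeError, because an
# UnboundMethod from an unrelated module cannot be bound elsewhere.
# Ruby 2.0 relaxed this for methods defined in modules.
x = Module.new { def foo; :foo; end }
y = Module.new { define_method(:bar, x.instance_method(:foo)) }

obj = Object.new.extend(y)
obj.bar # => :foo on Ruby 2.0 and later
```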

Wednesday, October 2, 2013

Statsd

During the last couple of days, I was trying to find my way around Statsd from Etsy. It seems to be a wonderful tool which lets users send stats (or tracking) to a server, which in turn can be visualized using tools like Graphite.

The setup is quite simple: add the Statsd gem to your application's Gemfile and then write a Rack middleware which sends information to the predefined server. Before sending anything, we need to set up the Statsd Node.js server and give its IP and port to the gem.
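A minimal sketch of such a middleware. The Statsd client is injected, so any object responding to #timing(stat, ms) works, e.g. an instance of the statsd gem's client pointed at your server's IP and port; the metric name is made up for the example:

```ruby
# Rack middleware that reports per-request timing to an injected
# Statsd-style client (anything with #timing(stat, milliseconds)).
class StatsdTimingMiddleware
  def initialize(app, statsd)
    @app = app
    @statsd = statsd
  end

  def call(env)
    started = Time.now
    response = @app.call(env)
    elapsed_ms = ((Time.now - started) * 1000).round
    @statsd.timing('rack.request', elapsed_ms)
    response
  end
end
```

In config.ru this would be mounted with something like `use StatsdTimingMiddleware, Statsd.new('127.0.0.1', 8125)` (host and port are placeholders).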

Matt Aimonetti has written a great post about setting up the service.

I have found it damn hard to set up Graphite.

The download is easy. git clone the 3 repositories and then run check_dependencies.

Install all the required libraries using pip. Now comes the hard part: a 2D graphics library called Cairo is a dependency of Graphite, and its Python bindings, py2cairo (pycairo on Python 3), have to be installed as well.

I would suggest installing cairo using

sudo brew install cairo

Homebrew is better than MacPorts.

Install pycairo using pip

sudo pip install pycairo

Or

easy_install pycairo

You might get errors when importing the module for cairo. This might be because you installed it via easy_install.

If it does not work (it did not for me, because I have 2 versions of Python), symlink from site-packages/Cairo to the place where your Python searches for site-packages.

Then run check_dependencies again and you are good to go. I still have to set up Graphite, but I hope I'll have fewer obstacles after this.

Saturday, September 21, 2013

Experimenting with Rails 4

I had some time this week and wanted to try out Rails 4 and Ruby 2. That's exactly what I did !

I did not try out key-based cache expiration or the ETag stuff. But I did try out some Server-Sent Events. And it went pretty well.

I started out installing Ruby 2 with rvm (I use rvm, not rbenv, for some reason). Installing 2.0 went through, but when I tried to use a specific gemset for a test application, I got an error from Rubinius for some odd reason, saying rbx had to be installed. Somehow it messed up the whole setup and I had to do away with gemsets. I also had to remove the rbx* installation directory because it was spewing warnings to the console every time rvm was set.

You must upgrade RVM :)

This benchmark is a great read for 2.0

I wrote the version to the .ruby-version file and that was the start of the project.

I wanted to do something which had some real time communication. Tic Tac Toe seemed to be a nice choice.

Then there was this weird problem with sqlite3. For some odd reason it could not find the native binding of sqlite3. The path was correctly set, but it just could not find it. I had to spend some hours of debugging and a lot of googling to figure this out !

The error was

/usr/local/share/gems/gems/sqlite3-1.3.7/lib/sqlite3.rb:6:in `require': cannot load such file -- sqlite3/sqlite3_ruby (LoadError)

This post saved me at last. Seems like we have to change the actual file of the gem to get this thing running.

In the case of rvm, the file would be under

~/.rvm/gems/ruby-2.0.0-p247/gems/sqlite3-1.3.8/lib/sqlite3.rb

and you will have to point the require at the native extension's relative location, i.e.

~/.rvm/gems/ruby-2.0.0-p247/gems/sqlite3-X-X-X/ext/sqlite3/sqlite3_native


Once this is done, Rails finds it and loads it. This, I think, is NOT good for a Continuous Integration server.

Server-Sent Events came next. I went through this excellent post on how to get started with SSE. I managed to add Redis pub/sub with SSE after looking through this post. And I ultimately used some thread processing to make it less blocking, as per this post.
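The core of the SSE-plus-Redis idea can be sketched like this; controller, channel and error handling are illustrative assumptions, not my actual POC code:

```ruby
# Hypothetical Rails 4 SSE endpoint: subscribe to a Redis channel and
# stream each published message to the browser as an SSE "data:" frame.
class MovesController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'text/event-stream'
    redis = Redis.new
    redis.subscribe('moves') do |on|
      on.message do |_channel, message|
        response.stream.write("data: #{message}\n\n")
      end
    end
  rescue IOError
    # Raised when we write to the stream after the client disconnects.
  ensure
    redis.quit if redis
    response.stream.close
  end
end
```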

Another problem I had was with the servers. I started with Unicorn, which was a bad choice: Unicorn kills connections because it is meant for fast request/response cycles. For persistent connections we need Puma or Rainbows!

I started using Puma but I did the mistake of not putting

config.preload_frameworks = true 
config.allow_concurrency = true

in the application.rb file. Only one concurrent request was permitted because of this omission. (I have no idea why this is not the default ?)

But that did not solve my problems. Puma constantly gave me errors like

ThreadError: Attempt to unlock a mutex which is locked by another thread

which is not good. It never called the ensure block (I don't know why), and I seriously doubt the sockets were being closed :|

That's when I tried Rainbows! I added the rainbow.rb file for the server config and started the server. It worked great for me !

I also learned that attr_accessible is no more in Rails; it's all about Strong Parameters now.
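The controller-side replacement looks roughly like this; the model and attribute names are invented for the sketch, not taken from my POC:

```ruby
# With Strong Parameters, whitelisting moves from the model
# (attr_accessible) into the controller.
class GamesController < ApplicationController
  def create
    @game = Game.create(game_params)
  end

  private

  # Only the permitted attributes survive mass assignment.
  def game_params
    params.require(:game).permit(:player_x, :player_o)
  end
end
```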

I tried adding rails-api to the project (this was my first time). It was only after some time that I realized that the rails-api stack removes the session-handling middleware, Rack::Session::Cookie, because it aims to do as little processing as possible.

I was working with an existing Rails application and I did not want this feature. I tried adding

config.api_only = false

to the application.rb file. But that did not help me.

I had to add the particular require for the api for this to work in the Gemfile

gem 'rails-api', require: 'rails-api/action_controller/api'

I wanted to try my bit with the state machine and I managed to write this inside Concerns.

The POC is still incomplete, but I loved doing this after a long time !



Friday, March 15, 2013

Less Caching

We came across this unusual problem while developing. For starters: there is a gem that we use, and a test application resides in the gem's test folder. The gem has only JavaScript in the app/assets folder plus minimal Ruby code.

The gem uses Bootstrap's LESS version for the CSS. We use Bootstrap's mixins and overwrite variables and such. The gem still used SASS, and we felt a little odd writing mixed CSS, so we moved everything to LESS files under assets/stylesheets. Nice and organized.

Now we had the problem: we made a change to the LESS files and the application did not load it. We had put all our imports in a main file, and any change to the main file was reflected in the test app, but not if the change was made inside the individual imports.

We wanted to figure out whether the problem was with Sprockets or with less-rails. After some probing we concluded that the problem was indeed with less-rails: the gem caches the imports, which prevents recompilation when anything below the first level (the base file's imports) changes.

I tried adding options like

config.consider_all_requests_local = true
config.action_controller.perform_caching = false

inside the environment files, but alas! no change whatsoever.

So we had to do the inevitable: watch the files and trigger a restart and cache clear.

The process was to be something like :

if Change? 
  rake tmp:clear 
  powder restart (or touch tmp/restart) [we use pow]

Guard was perfect for this.

2 Guard plugins were used for this :
  guard-process => To run the rake process
  guard-pow => To restart Pow

The file looked something like this

# RUN bundle exec guard -p -i -w ../../
guard 'pow' do
  watch(%r{app/assets/stylesheets/.+\.less$})
end

guard 'process', :name => 'ClearCache', :command => 'rake tmp:clear' do
  watch(%r{app/assets/stylesheets/.+\.less$})
end

The guard command needed to run with
  -w because we were looking at a change from another directory
  -p for polling
  -i for no interactions

phew..

Wednesday, August 8, 2012

Compiling Ruby 1.9.2 with OS X 10.7

I had to set up a machine running OS X 10.7 with Ruby 1.9.2 because the application ran on this version. It was a Rails 3.1 application.

I tried to get the machine set up with rbenv, which went smoothly.

I tried to do a

rbenv install 1.9.2-p180

Which did not go well at all.

I got errors like

/usr/bin/gcc-4.2 -dynamic -bundle -o ../../../.ext/x86_64-darwin11.3.0/racc/cparse.bundle cparse.o -L. -L../../.. -L/Users/jasonvdm/.rvm/usr/lib -L. -L/usr/local/lib -Wl,-undefined,dynamic_lookup -Wl,-multiply_defined,suppress -Wl,-flat_namespace  -lruby.1.9.1  -lpthread -ldl -lobjc 
compiling readline/usr/bin/gcc-4.2 -I. -I../../.ext/include/x86_64-darwin11.3.0 -I../.././include -I../.././ext/readline -DRUBY_EXTCONF_H=\"extconf.h\" -I/Users/jasonvdm/.rvm/usr/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE   -fno-common -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wshorten-64-to-32 -Wno-long-long  -fno-common -pipe  -o readline.o -c readline.c
readline.c: In function ‘username_completion_proc_call’:
readline.c:1386: error: ‘username_completion_function’ undeclared (first use in this function)
readline.c:1386: error: (Each undeclared identifier is reported only once
readline.c:1386: error: for each function it appears in.)
make[1]: *** [readline.o] Error 1
make: *** [mkmain.sh] Error 1

Which essentially means that I needed to update the readline library to get this error out of the way. I started by downloading readline 6.1 manually and installing it, but that failed miserably.

curl -O ftp://ftp.gnu.org/gnu/readline/readline-6.1.tar.gz
tar xzvf readline-6.1.tar.gz
cd readline-6.1
./configure --prefix=/usr/local
make
sudo make install

Then I read this, which saved me a lot of time. It essentially says to install readline via rvm. rvm was already installed on the system, and readline got installed by the command

rvm pkg install readline


Be sure to install the Ruby version using the --with-readline-dir option, like so:

rvm reinstall 1.9.2 --with-readline-dir=$rvm_path/usr

Now when you try to install with this line, it might work, or you might get errors like

ld: in /usr/local/lib/libiconv.2.dylib, missing required architecture x86_64 in file
collect2: ld returned 1 exit status
make[1]: *** [../../.ext/x86_64-darwin10.5.0/tcltklib.bundle] Error 1
make: *** [mkmain.sh] Error 1

Well, this means that you might have the wrong dylib version on the machine.

The first thing we need to look at now is whether we have Xcode. If you have Xcode 4.2, check for this file:

/usr/lib/libiconv.2.dylib

Try executing the file command on it:

file /usr/lib/libiconv.2.dylib 
/usr/lib/libiconv.2.dylib: Mach-O universal binary with 3 architectures
/usr/lib/libiconv.2.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64 
/usr/lib/libiconv.2.dylib (for architecture i386): Mach-O dynamically linked shared library i386 
/usr/lib/libiconv.2.dylib (for architecture ppc7400): Mach-O dynamically linked shared library ppc

Find the x86_64 entry and you are in luck !

Just remove the old dylib and link the new one like so :

rm /usr/local/lib/libiconv.2.dylib
ln -s /usr/lib/libiconv.2.dylib /usr/local/lib/libiconv.2.dylib

Now running the rvm install works great !

Kudos !!!

References:
http://hello.keewooi.com/ruby-1-9-3-preview-1-available-now/
http://forums.pragprog.com/forums/148/topics/9975
http://geekyninja.blogspot.de/2010/12/installing-ruby-192-with-rvm.html
http://stackoverflow.com/questions/5426892/trouble-installing-ruby-1-9-2-with-rvm-mac-os-x
http://stackoverflow.com/questions/3703792/ld-symbols-not-found-when-linking
http://stackoverflow.com/questions/7962550/error-installing-ruby-1-9-3
https://rvm.io/packages/readline/
http://stackoverflow.com/questions/8675194/error-installing-1-9-3-with-rvm-on-lion

Friday, July 27, 2012

API accesses

It's great how far technology has advanced in terms of providing access. Nowadays one gets APIs to access almost any online application, documented precisely and accurately. And if one wants a wrapper for languages like Ruby, Python, Perl or PHP, that is readily available as well.

Well, I'm going to briefly discuss some gems I have used to access these APIs.

The first gem that I find purely awesome is the 'omniauth' gem. It has all the routines necessary to build add-ons for other OmniAuth plugins. You might need to read up on OAuth to get an idea. I managed to use the plugins omniauth-twitter, omniauth-facebook, omniauth-linkedin and omniauth-instagram with this gem. And it's so easy.

You will have to add an omniauth.rb file in the initializers for starters.


Rails.application.config.middleware.use OmniAuth::Builder do 
 provider :linkedin, LINKEDIN_CONFIG['api_key'], LINKEDIN_CONFIG['secret_key']   
 provider :twitter, TWITTER_CONFIG['consumer_key'], TWITTER_CONFIG['consumer_secret'] 
 provider :facebook, FACEBOOK_CONFIG['app_id'], FACEBOOK_CONFIG['secret'], :scope => FACEBOOK_CONFIG['scope'], :display => 'popup' 
 provider :instagram, INSTAGRAM_CONFIG['client_id'], INSTAGRAM_CONFIG['client_secret'], :scope => INSTAGRAM_CONFIG['scope'] 
end


It might be a good idea to inject these configuration files on deployment rather than check them into git.

Then you will have to call something like

current_user.services.create!(
 :provider => auth['provider'], 
 :uid => auth['uid'], 
 :token => (auth['credentials']['token'] rescue nil), 
 :secret => (auth['credentials']['secret'] rescue nil))

Better add lots of exception handling.

This is the authorization part. The general idea is that at the end of the authentication you get a token, which you should use for further requests to these services. To access the data from these services, one would use other gems and pass the auth_token (and auth_secret).

The gems used to access the data are :
fb_graph Gem for Facebook
twitter Gem for Twitter
linkedin Gem for LinkedIn
instagram Gem for Instagram

Try them out. You will love them.

Monday, May 21, 2012

More Tips

It's nice to be learning new stuff. I'm just going to write line by line of some more stuff that I learned ..

Always use UTF-8 as the default charset in MySQL.

I added this in my.cnf (/etc/):

[mysqld]
default-character-set=utf8
default-collation=utf8_general_ci
character-set-server=utf8
collation-server=utf8_general_ci
init-connect='SET NAMES utf8'

[client]
default-character-set=utf8

If you are trying to sync between DBs, then take chunks of data.

Say you are trying to sync between DBs and you want to do some transformation before the sync (maybe rename some columns). Then you would want to do a select query to get a subset of data (LIMIT n) and then insert it in bulk using VALUES (n tuples).

If there is a possibility of updates instead of inserts, use the ON DUPLICATE KEY UPDATE clause.
It's a good idea to sync based on timestamps. What I did was touch a file every time I ran an update, so that the next time I know where I need to start from.
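The bulk upsert for one chunk can be sketched as follows; the table and column names are illustrative, and the values are assumed to be already quoted/escaped by the caller:

```ruby
# Build a bulk INSERT ... ON DUPLICATE KEY UPDATE statement for one
# chunk of rows, so existing rows are updated instead of failing.
def bulk_upsert_sql(table, columns, rows)
  values  = rows.map { |row| "(#{row.join(', ')})" }.join(', ')
  updates = columns.map { |col| "#{col} = VALUES(#{col})" }.join(', ')
  "INSERT INTO #{table} (#{columns.join(', ')}) " \
    "VALUES #{values} ON DUPLICATE KEY UPDATE #{updates}"
end
```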

If a transaction fails because of lock errors, write a retry routine in code.

Locks can occur, especially on DBs with a lot of reads/writes. So I would say: try writing x times and then raise an exception.
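A generic sketch of such a retry routine; in a real Rails app you would rescue the specific lock/deadlock exception instead of StandardError:

```ruby
# Run the block, retrying up to `attempts` times on errors (e.g. lock
# wait timeouts) before re-raising the last error.
def with_lock_retries(attempts = 3)
  tries = 0
  begin
    yield
  rescue StandardError
    tries += 1
    retry if tries < attempts
    raise
  end
end
```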

Optimize Active Admin if using with a DB with many records.

I have had this problem where the table was HUGE and there were associations with another table with a HUGE number of records. This caused the application to load very slowly and also made writes fail.

The fix I made was to take out those associations and write getters and setters instead. Some extra code, but it makes a HUGE difference to performance.

Use S3 when you want to store data.

The application had to generate reports and store them for future viewing. The implementation flow is as follows:


  1. User Creates Report
  2. He sets the report to execute on a specific time (he could also execute this report manually)
  3. On triggering of the report, a GeneratedReport is created and the job is pushed to delayed_job
  4. When the delayed_job gets to it, the report is executed and the result is written to a tmp file
  5. This tmp file is uploaded to S3 as a private file and a url is made with an expiration of months.

More to Come.


Friday, March 23, 2012

Learning Start

I flew from Bangalore to Berlin on the 1st of March, the day my visa started. The weather was cold and gloomy, but I met some Mallus on the way and did some socializing. I asked them about eating habits, where I could save money and so on.

It took me 17 hours to reach Tegel, Berlin. After that I took a cab to the workplace. A great bunch of people greeted me and gave me the laptop and other stuff. I stayed with John for some days before he left for his vacation. John even got me the SIM that I have in my phone now !

I managed to get some paperwork done. I have got my insurance and Bürgeramt registration as of now. I am waiting for the work visa approval, an apartment for rent, a bank account, etc ..

Anna took me to a small bar where they played music and the people who came in sang and played instruments. A bunch of people did a lot of yo-yo tricks as well ...


I'm at Gregory's apartment for now and am looking out for a place to stay before Reshma comes.


Rails Dev is going great too ... I learned a lot of stuff too ...

Some of them are

Rspec
Rvm
guard
simplecov
factorygirl
pow and powder
gem based coding and development
databasecleaner
forgery
airbrake
relic
scalarium
activeadmin
devise
vagrant
markdown

Most of them I knew before, and now understand better ....

Exciting times !!!!

Sunday, January 15, 2012

FB Likes

I have been trying to get the Facebook Like button to appear in an application. It was getting somewhat stressful because the Like would just not take the whole URL, but only the base URL.


I started using the XFBML first because the way to integrate it is so nice. 


Add 


<html xmlns:fb="http://ogp.me/ns/fb#"> 


as the namespace and then use 


<fb:like href="http://google.com/" send="false" layout="button_count" width="450" show_faces="false"></fb:like>


Easy enough. The application used a lot of Ajax, and therefore these likes need some refreshing. To do this I used the available JavaScript method


FB.XFBML.parse()


and it started working. But the likes were happening only for the base URL and not for the href in the fb:like. That is when I saw the statement in the documentation which says 'The XFBML version defaults to the current page.'


urghh


I went on to use the iframe version so that I could get likes for the particular URL. It seems it's important that the URL ends with '/'; only then does Facebook identify the absolute URL.