c++ - Issue with buffer data in OpenGL: only draws if I buffer more bytes than needed


Below is the relevant code from my main.cpp and shaders (originally posted as pastebins). It uses DevIL, GLLoad, and GLFW, and runs on both Windows and Linux. It loads a PNG named pic.png.

I buffer the data in the normal way. It's a simple triangle.

```cpp
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

//                                vx     vy     vz    vw     nx    ny    nz     u     v
float bufferDataThree[9*3] = { -1.0f, -1.0f,  0.0f, 1.0f,  0.0f, 1.0f, 0.0f,  0.0f, 0.0f,
                                1.0f, -1.0f,  0.0f, 1.0f,  0.0f, 1.0f, 0.0f,  0.0f, 1.0f,
                                1.0f,  1.0f,  0.0f, 1.0f,  0.0f, 1.0f, 0.0f,  1.0f, 1.0f };
// total 4 + 3 + 2 = 9 floats per vertex

glBufferData(GL_ARRAY_BUFFER, (9*3)*4, bufferDataThree, GL_STATIC_DRAW);    // doesn't work
//glBufferData(GL_ARRAY_BUFFER, (10*3)*4, bufferDataThree, GL_STATIC_DRAW); // works
```

That is 9*3 = 27 floats, therefore 108 bytes. If I buffer 108 bytes, the texture coordinates get screwed up. If I buffer 116 bytes (2 floats more), it renders fine.

My display method:

```cpp
void display()
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tbo);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, (4 + 3 + 2)*sizeof(float), 0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, (4 + 3 + 2)*sizeof(float), (void*)(4*sizeof(float)));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, (4 + 3 + 2)*sizeof(float), (void*)((4+3)*sizeof(float)));

    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(0);
    glUseProgram(0);

    glfwSwapBuffers();
}
```

How can this be happening?

The second argument of glVertexAttribPointer is the number of components per attribute: it should be 2 for the texture coordinate and 3 for the normal, not 4. With every attribute declared as 4 components, the texture-coordinate fetch for the last vertex reads past the end of a 108-byte buffer, which is exactly why over-allocating the buffer makes it "work".
